Dec 13 06:42:07 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 13 06:42:07 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 13 06:42:07 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 06:42:07 localhost kernel: BIOS-provided physical RAM map:
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 06:42:07 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Dec 13 06:42:07 localhost kernel: NX (Execute Disable) protection: active
Dec 13 06:42:07 localhost kernel: APIC: Static calls initialized
Dec 13 06:42:07 localhost kernel: SMBIOS 2.8 present.
Dec 13 06:42:07 localhost kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Dec 13 06:42:07 localhost kernel: Hypervisor detected: KVM
Dec 13 06:42:07 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 06:42:07 localhost kernel: kvm-clock: using sched offset of 3363036111 cycles
Dec 13 06:42:07 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 06:42:07 localhost kernel: tsc: Detected 2445.404 MHz processor
Dec 13 06:42:07 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 06:42:07 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 06:42:07 localhost kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Dec 13 06:42:07 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 06:42:07 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 06:42:07 localhost kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 13 06:42:07 localhost kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Dec 13 06:42:07 localhost kernel: Using GB pages for direct mapping
Dec 13 06:42:07 localhost kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 13 06:42:07 localhost kernel: ACPI: Early table checksum verification disabled
Dec 13 06:42:07 localhost kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Dec 13 06:42:07 localhost kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:42:07 localhost kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:42:07 localhost kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:42:07 localhost kernel: ACPI: FACS 0x000000007FFDFC80 000040
Dec 13 06:42:07 localhost kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:42:07 localhost kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:42:07 localhost kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:42:07 localhost kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Dec 13 06:42:07 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Dec 13 06:42:07 localhost kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Dec 13 06:42:07 localhost kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Dec 13 06:42:07 localhost kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Dec 13 06:42:07 localhost kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Dec 13 06:42:07 localhost kernel: No NUMA configuration found
Dec 13 06:42:07 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Dec 13 06:42:07 localhost kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Dec 13 06:42:07 localhost kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Dec 13 06:42:07 localhost kernel: Zone ranges:
Dec 13 06:42:07 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 06:42:07 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 06:42:07 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000027fffffff]
Dec 13 06:42:07 localhost kernel:   Device   empty
Dec 13 06:42:07 localhost kernel: Movable zone start for each node
Dec 13 06:42:07 localhost kernel: Early memory node ranges
Dec 13 06:42:07 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 06:42:07 localhost kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 13 06:42:07 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000027fffffff]
Dec 13 06:42:07 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Dec 13 06:42:07 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 06:42:07 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 06:42:07 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 13 06:42:07 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 06:42:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 06:42:07 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 06:42:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 06:42:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 06:42:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 06:42:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 06:42:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 06:42:07 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 06:42:07 localhost kernel: TSC deadline timer available
Dec 13 06:42:07 localhost kernel: CPU topo: Max. logical packages:   4
Dec 13 06:42:07 localhost kernel: CPU topo: Max. logical dies:       4
Dec 13 06:42:07 localhost kernel: CPU topo: Max. dies per package:   1
Dec 13 06:42:07 localhost kernel: CPU topo: Max. threads per core:   1
Dec 13 06:42:07 localhost kernel: CPU topo: Num. cores per package:     1
Dec 13 06:42:07 localhost kernel: CPU topo: Num. threads per package:   1
Dec 13 06:42:07 localhost kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 13 06:42:07 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 06:42:07 localhost kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 06:42:07 localhost kernel: kvm-guest: setup PV sched yield
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 13 06:42:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 13 06:42:07 localhost kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 06:42:07 localhost kernel: Booting paravirtualized kernel on KVM
Dec 13 06:42:07 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 06:42:07 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 06:42:07 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Dec 13 06:42:07 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u524288 alloc=1*2097152
Dec 13 06:42:07 localhost kernel: pcpu-alloc: [0] 0 1 2 3 
Dec 13 06:42:07 localhost kernel: kvm-guest: PV spinlocks enabled
Dec 13 06:42:07 localhost kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 06:42:07 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 06:42:07 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 13 06:42:07 localhost kernel: random: crng init done
Dec 13 06:42:07 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 06:42:07 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 06:42:07 localhost kernel: Fallback order for Node 0: 0 
Dec 13 06:42:07 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 13 06:42:07 localhost kernel: Policy zone: Normal
Dec 13 06:42:07 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 06:42:07 localhost kernel: software IO TLB: area num 4.
Dec 13 06:42:07 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 06:42:07 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 13 06:42:07 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 13 06:42:07 localhost kernel: Dynamic Preempt: voluntary
Dec 13 06:42:07 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 06:42:07 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 13 06:42:07 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Dec 13 06:42:07 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 13 06:42:07 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 13 06:42:07 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 06:42:07 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 06:42:07 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 06:42:07 localhost kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 06:42:07 localhost kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 06:42:07 localhost kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 06:42:07 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Dec 13 06:42:07 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 06:42:07 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 13 06:42:07 localhost kernel: Console: colour VGA+ 80x25
Dec 13 06:42:07 localhost kernel: printk: console [ttyS0] enabled
Dec 13 06:42:07 localhost kernel: ACPI: Core revision 20230331
Dec 13 06:42:07 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 06:42:07 localhost kernel: x2apic enabled
Dec 13 06:42:07 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 06:42:07 localhost kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 06:42:07 localhost kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 06:42:07 localhost kernel: kvm-guest: setup PV IPIs
Dec 13 06:42:07 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 06:42:07 localhost kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Dec 13 06:42:07 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 06:42:07 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 06:42:07 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 06:42:07 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 06:42:07 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 06:42:07 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 13 06:42:07 localhost kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 06:42:07 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 06:42:07 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 06:42:07 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 06:42:07 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 06:42:07 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 06:42:07 localhost kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Dec 13 06:42:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 06:42:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 06:42:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 06:42:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 06:42:07 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 06:42:07 localhost kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Dec 13 06:42:07 localhost kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 13 06:42:07 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 13 06:42:07 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 13 06:42:07 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 13 06:42:07 localhost kernel: landlock: Up and running.
Dec 13 06:42:07 localhost kernel: Yama: becoming mindful.
Dec 13 06:42:07 localhost kernel: SELinux:  Initializing.
Dec 13 06:42:07 localhost kernel: LSM support for eBPF active
Dec 13 06:42:07 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 06:42:07 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 06:42:07 localhost kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 13 06:42:07 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 06:42:07 localhost kernel: ... version:                0
Dec 13 06:42:07 localhost kernel: ... bit width:              48
Dec 13 06:42:07 localhost kernel: ... generic registers:      6
Dec 13 06:42:07 localhost kernel: ... value mask:             0000ffffffffffff
Dec 13 06:42:07 localhost kernel: ... max period:             00007fffffffffff
Dec 13 06:42:07 localhost kernel: ... fixed-purpose events:   0
Dec 13 06:42:07 localhost kernel: ... event mask:             000000000000003f
Dec 13 06:42:07 localhost kernel: signal: max sigframe size: 3376
Dec 13 06:42:07 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 13 06:42:07 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 13 06:42:07 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 13 06:42:07 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 13 06:42:07 localhost kernel: .... node  #0, CPUs:      #1 #2 #3
Dec 13 06:42:07 localhost kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 06:42:07 localhost kernel: smpboot: Total of 4 processors activated (19563.23 BogoMIPS)
Dec 13 06:42:07 localhost kernel: node 0 deferred pages initialised in 9ms
Dec 13 06:42:07 localhost kernel: Memory: 7766284K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 617172K reserved, 0K cma-reserved)
Dec 13 06:42:07 localhost kernel: devtmpfs: initialized
Dec 13 06:42:07 localhost kernel: x86/mm: Memory block size: 128MB
Dec 13 06:42:07 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 06:42:07 localhost kernel: futex hash table entries: 1024 (65536 bytes on 1 NUMA nodes, total 64 KiB, linear).
Dec 13 06:42:07 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 06:42:07 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 06:42:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 13 06:42:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 06:42:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 06:42:07 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 13 06:42:07 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 13 06:42:07 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 06:42:07 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 06:42:07 localhost kernel: audit: type=2000 audit(1765608125.976:1): state=initialized audit_enabled=0 res=1
Dec 13 06:42:07 localhost kernel: cpuidle: using governor menu
Dec 13 06:42:07 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 06:42:07 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 13 06:42:07 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 06:42:07 localhost kernel: PCI: Using configuration type 1 for base access
Dec 13 06:42:07 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 06:42:07 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 06:42:07 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 06:42:07 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 06:42:07 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 06:42:07 localhost kernel: Demotion targets for Node 0: null
Dec 13 06:42:07 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 06:42:07 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 13 06:42:07 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 13 06:42:07 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 06:42:07 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 06:42:07 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 06:42:07 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 06:42:07 localhost kernel: ACPI: Interpreter enabled
Dec 13 06:42:07 localhost kernel: ACPI: PM: (supports S0 S5)
Dec 13 06:42:07 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 06:42:07 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 06:42:07 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 06:42:07 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 06:42:07 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 06:42:07 localhost kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 13 06:42:07 localhost kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Dec 13 06:42:07 localhost kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Dec 13 06:42:07 localhost kernel: PCI host bridge to bus 0000:00
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 06:42:07 localhost kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Dec 13 06:42:07 localhost kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [1] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [2] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [3] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [4] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [5] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [6] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [7] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [8] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [9] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [10] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [11] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [12] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [13] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [14] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [15] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [16] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [17] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [18] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [19] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [20] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [21] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [22] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [23] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [24] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [25] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [26] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [27] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [28] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [29] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [30] registered
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [31] registered
Dec 13 06:42:07 localhost kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-2] registered
Dec 13 06:42:07 localhost kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Dec 13 06:42:07 localhost kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-3] registered
Dec 13 06:42:07 localhost kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Dec 13 06:42:07 localhost kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-4] registered
Dec 13 06:42:07 localhost kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-5] registered
Dec 13 06:42:07 localhost kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 13 06:42:07 localhost kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-6] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-7] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-8] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-9] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-10] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-11] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-12] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-13] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-14] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-15] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-16] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Dec 13 06:42:07 localhost kernel: acpiphp: Slot [0-17] registered
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 06:42:07 localhost kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 06:42:07 localhost kernel: iommu: Default domain type: Translated
Dec 13 06:42:07 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 06:42:07 localhost kernel: SCSI subsystem initialized
Dec 13 06:42:07 localhost kernel: ACPI: bus type USB registered
Dec 13 06:42:07 localhost kernel: usbcore: registered new interface driver usbfs
Dec 13 06:42:07 localhost kernel: usbcore: registered new interface driver hub
Dec 13 06:42:07 localhost kernel: usbcore: registered new device driver usb
Dec 13 06:42:07 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 06:42:07 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 06:42:07 localhost kernel: PTP clock support registered
Dec 13 06:42:07 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 13 06:42:07 localhost kernel: NetLabel: Initializing
Dec 13 06:42:07 localhost kernel: NetLabel:  domain hash size = 128
Dec 13 06:42:07 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 13 06:42:07 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 13 06:42:07 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 13 06:42:07 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 06:42:07 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 06:42:07 localhost kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 06:42:07 localhost kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 06:42:07 localhost kernel: vgaarb: loaded
Dec 13 06:42:07 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 06:42:07 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 06:42:07 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 06:42:07 localhost kernel: pnp: PnP ACPI init
Dec 13 06:42:07 localhost kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 06:42:07 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 13 06:42:07 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 06:42:07 localhost kernel: NET: Registered PF_INET protocol family
Dec 13 06:42:07 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 06:42:07 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 06:42:07 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 06:42:07 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 06:42:07 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 06:42:07 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 06:42:07 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 13 06:42:07 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 06:42:07 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 06:42:07 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 06:42:07 localhost kernel: NET: Registered PF_XDP protocol family
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Dec 13 06:42:07 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Dec 13 06:42:07 localhost kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Dec 13 06:42:07 localhost kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 06:42:07 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 13 06:42:07 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 06:42:07 localhost kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Dec 13 06:42:07 localhost kernel: ACPI: bus type thunderbolt registered
Dec 13 06:42:07 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 13 06:42:07 localhost kernel: Initialise system trusted keyrings
Dec 13 06:42:07 localhost kernel: Key type blacklist registered
Dec 13 06:42:07 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 13 06:42:07 localhost kernel: zbud: loaded
Dec 13 06:42:07 localhost kernel: integrity: Platform Keyring initialized
Dec 13 06:42:07 localhost kernel: integrity: Machine keyring initialized
Dec 13 06:42:07 localhost kernel: Freeing initrd memory: 87820K
Dec 13 06:42:07 localhost kernel: NET: Registered PF_ALG protocol family
Dec 13 06:42:07 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 13 06:42:07 localhost kernel: Key type asymmetric registered
Dec 13 06:42:07 localhost kernel: Asymmetric key parser 'x509' registered
Dec 13 06:42:07 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 13 06:42:07 localhost kernel: io scheduler mq-deadline registered
Dec 13 06:42:07 localhost kernel: io scheduler kyber registered
Dec 13 06:42:07 localhost kernel: io scheduler bfq registered
Dec 13 06:42:07 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 13 06:42:07 localhost kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Dec 13 06:42:07 localhost kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Dec 13 06:42:07 localhost kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Dec 13 06:42:07 localhost kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Dec 13 06:42:07 localhost kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Dec 13 06:42:07 localhost kernel: shpchp 0000:01:00.0: Slot initialization failed
Dec 13 06:42:07 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 13 06:42:07 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 13 06:42:07 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 13 06:42:07 localhost kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Dec 13 06:42:07 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 06:42:07 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 06:42:07 localhost kernel: Non-volatile memory driver v1.3
Dec 13 06:42:07 localhost kernel: rdac: device handler registered
Dec 13 06:42:07 localhost kernel: hp_sw: device handler registered
Dec 13 06:42:07 localhost kernel: emc: device handler registered
Dec 13 06:42:07 localhost kernel: alua: device handler registered
Dec 13 06:42:07 localhost kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Dec 13 06:42:07 localhost kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Dec 13 06:42:07 localhost kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Dec 13 06:42:07 localhost kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Dec 13 06:42:07 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 13 06:42:07 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 13 06:42:07 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 13 06:42:07 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 13 06:42:07 localhost kernel: usb usb1: SerialNumber: 0000:02:01.0
Dec 13 06:42:07 localhost kernel: hub 1-0:1.0: USB hub found
Dec 13 06:42:07 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 13 06:42:07 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 13 06:42:07 localhost kernel: usbserial: USB Serial support registered for generic
Dec 13 06:42:07 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 06:42:07 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 06:42:07 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 06:42:07 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 06:42:07 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 06:42:07 localhost kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 06:42:07 localhost kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 06:42:07 localhost kernel: rtc_cmos 00:03: setting system clock to 2025-12-13T06:42:07 UTC (1765608127)
Dec 13 06:42:07 localhost kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 06:42:07 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 06:42:07 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 06:42:07 localhost kernel: usbcore: registered new interface driver usbhid
Dec 13 06:42:07 localhost kernel: usbhid: USB HID core driver
Dec 13 06:42:07 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 13 06:42:07 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 13 06:42:07 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 13 06:42:07 localhost kernel: Initializing XFRM netlink socket
Dec 13 06:42:07 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 13 06:42:07 localhost kernel: Segment Routing with IPv6
Dec 13 06:42:07 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 13 06:42:07 localhost kernel: mpls_gso: MPLS GSO support
Dec 13 06:42:07 localhost kernel: IPI shorthand broadcast: enabled
Dec 13 06:42:07 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 06:42:07 localhost kernel: AES CTR mode by8 optimization enabled
Dec 13 06:42:07 localhost kernel: sched_clock: Marking stable (1121001876, 146024811)->(1370405560, -103378873)
Dec 13 06:42:07 localhost kernel: registered taskstats version 1
Dec 13 06:42:07 localhost kernel: Loading compiled-in X.509 certificates
Dec 13 06:42:07 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 06:42:07 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 13 06:42:07 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 13 06:42:07 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 13 06:42:07 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 13 06:42:07 localhost kernel: Demotion targets for Node 0: null
Dec 13 06:42:07 localhost kernel: page_owner is disabled
Dec 13 06:42:07 localhost kernel: Key type .fscrypt registered
Dec 13 06:42:07 localhost kernel: Key type fscrypt-provisioning registered
Dec 13 06:42:07 localhost kernel: Key type big_key registered
Dec 13 06:42:07 localhost kernel: Key type encrypted registered
Dec 13 06:42:07 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 06:42:07 localhost kernel: Loading compiled-in module X.509 certificates
Dec 13 06:42:07 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 06:42:07 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 13 06:42:07 localhost kernel: ima: No architecture policies found
Dec 13 06:42:07 localhost kernel: evm: Initialising EVM extended attributes:
Dec 13 06:42:07 localhost kernel: evm: security.selinux
Dec 13 06:42:07 localhost kernel: evm: security.SMACK64 (disabled)
Dec 13 06:42:07 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 13 06:42:07 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 13 06:42:07 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 13 06:42:07 localhost kernel: evm: security.apparmor (disabled)
Dec 13 06:42:07 localhost kernel: evm: security.ima
Dec 13 06:42:07 localhost kernel: evm: security.capability
Dec 13 06:42:07 localhost kernel: evm: HMAC attrs: 0x1
Dec 13 06:42:07 localhost kernel: Running certificate verification RSA selftest
Dec 13 06:42:07 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 13 06:42:07 localhost kernel: Running certificate verification ECDSA selftest
Dec 13 06:42:07 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 13 06:42:07 localhost kernel: clk: Disabling unused clocks
Dec 13 06:42:07 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 13 06:42:07 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 13 06:42:07 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 13 06:42:07 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 13 06:42:07 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 13 06:42:07 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 13 06:42:07 localhost kernel: Run /init as init process
Dec 13 06:42:07 localhost kernel:   with arguments:
Dec 13 06:42:07 localhost kernel:     /init
Dec 13 06:42:07 localhost kernel:   with environment:
Dec 13 06:42:07 localhost kernel:     HOME=/
Dec 13 06:42:07 localhost kernel:     TERM=linux
Dec 13 06:42:07 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 13 06:42:07 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 06:42:07 localhost systemd[1]: Detected virtualization kvm.
Dec 13 06:42:07 localhost systemd[1]: Detected architecture x86-64.
Dec 13 06:42:07 localhost systemd[1]: Running in initrd.
Dec 13 06:42:07 localhost systemd[1]: No hostname configured, using default hostname.
Dec 13 06:42:07 localhost systemd[1]: Hostname set to <localhost>.
Dec 13 06:42:07 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 13 06:42:07 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 13 06:42:07 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 06:42:07 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 13 06:42:07 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 13 06:42:07 localhost systemd[1]: Reached target Local File Systems.
Dec 13 06:42:07 localhost systemd[1]: Reached target Path Units.
Dec 13 06:42:07 localhost systemd[1]: Reached target Slice Units.
Dec 13 06:42:07 localhost systemd[1]: Reached target Swaps.
Dec 13 06:42:07 localhost systemd[1]: Reached target Timer Units.
Dec 13 06:42:07 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 13 06:42:07 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 13 06:42:07 localhost systemd[1]: Listening on Journal Socket.
Dec 13 06:42:07 localhost systemd[1]: Listening on udev Control Socket.
Dec 13 06:42:07 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 13 06:42:07 localhost systemd[1]: Reached target Socket Units.
Dec 13 06:42:07 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 13 06:42:07 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 13 06:42:07 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 13 06:42:07 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 13 06:42:07 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 13 06:42:07 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Dec 13 06:42:07 localhost systemd[1]: Starting Journal Service...
Dec 13 06:42:07 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 06:42:07 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 13 06:42:07 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 13 06:42:07 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Dec 13 06:42:07 localhost systemd[1]: Starting Create System Users...
Dec 13 06:42:07 localhost systemd[1]: Starting Setup Virtual Console...
Dec 13 06:42:07 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 13 06:42:07 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 13 06:42:07 localhost systemd[1]: Finished Create System Users.
Dec 13 06:42:07 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 13 06:42:07 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 13 06:42:07 localhost systemd-journald[280]: Journal started
Dec 13 06:42:07 localhost systemd-journald[280]: Runtime Journal (/run/log/journal/bdf0d7c05eef46ac89a1b1ab7cc430f1) is 8.0M, max 153.6M, 145.6M free.
Dec 13 06:42:07 localhost systemd-sysusers[283]: Creating group 'users' with GID 100.
Dec 13 06:42:07 localhost systemd-sysusers[283]: Creating group 'dbus' with GID 81.
Dec 13 06:42:07 localhost systemd-sysusers[283]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 13 06:42:07 localhost systemd[1]: Started Journal Service.
Dec 13 06:42:08 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 06:42:08 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 06:42:08 localhost systemd[1]: Finished Setup Virtual Console.
Dec 13 06:42:08 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 13 06:42:08 localhost systemd[1]: Starting dracut cmdline hook...
Dec 13 06:42:08 localhost dracut-cmdline[296]: dracut-9 dracut-057-102.git20250818.el9
Dec 13 06:42:08 localhost dracut-cmdline[296]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 06:42:08 localhost systemd[1]: Finished dracut cmdline hook.
Dec 13 06:42:08 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 13 06:42:08 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 06:42:08 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 13 06:42:08 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 13 06:42:08 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 13 06:42:08 localhost kernel: RPC: Registered udp transport module.
Dec 13 06:42:08 localhost kernel: RPC: Registered tcp transport module.
Dec 13 06:42:08 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 13 06:42:08 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 06:42:08 localhost rpc.statd[412]: Version 2.5.4 starting
Dec 13 06:42:08 localhost rpc.statd[412]: Initializing NSM state
Dec 13 06:42:08 localhost rpc.idmapd[417]: Setting log level to 0
Dec 13 06:42:08 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 13 06:42:08 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 06:42:08 localhost systemd-udevd[430]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 06:42:08 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 06:42:08 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 13 06:42:08 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 13 06:42:08 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 13 06:42:08 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 13 06:42:08 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 13 06:42:08 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 13 06:42:08 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 06:42:08 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 13 06:42:08 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 06:42:08 localhost systemd[1]: Reached target Network.
Dec 13 06:42:08 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 06:42:08 localhost systemd[1]: Starting dracut initqueue hook...
Dec 13 06:42:08 localhost kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Dec 13 06:42:08 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 13 06:42:08 localhost kernel:  vda: vda1
Dec 13 06:42:08 localhost kernel: libata version 3.00 loaded.
Dec 13 06:42:08 localhost systemd-udevd[457]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 06:42:08 localhost kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 06:42:08 localhost kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 06:42:08 localhost kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 13 06:42:08 localhost kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 13 06:42:08 localhost kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Dec 13 06:42:08 localhost kernel: scsi host0: ahci
Dec 13 06:42:08 localhost kernel: scsi host1: ahci
Dec 13 06:42:08 localhost kernel: scsi host2: ahci
Dec 13 06:42:08 localhost kernel: scsi host3: ahci
Dec 13 06:42:08 localhost kernel: scsi host4: ahci
Dec 13 06:42:08 localhost kernel: scsi host5: ahci
Dec 13 06:42:08 localhost kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Dec 13 06:42:08 localhost kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Dec 13 06:42:08 localhost kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Dec 13 06:42:08 localhost kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Dec 13 06:42:08 localhost kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Dec 13 06:42:08 localhost kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Dec 13 06:42:08 localhost systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 06:42:08 localhost systemd[1]: Reached target Initrd Root Device.
Dec 13 06:42:08 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 13 06:42:08 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 13 06:42:08 localhost systemd[1]: Reached target System Initialization.
Dec 13 06:42:08 localhost systemd[1]: Reached target Basic System.
Dec 13 06:42:08 localhost kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 06:42:08 localhost kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 06:42:08 localhost kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 06:42:08 localhost kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 06:42:08 localhost kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 06:42:08 localhost kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 06:42:08 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 06:42:08 localhost kernel: ata1.00: applying bridge limits
Dec 13 06:42:08 localhost kernel: ata1.00: configured for UDMA/100
Dec 13 06:42:08 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 13 06:42:08 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 13 06:42:08 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 06:42:08 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 06:42:08 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 13 06:42:09 localhost systemd[1]: Finished dracut initqueue hook.
Dec 13 06:42:09 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 06:42:09 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 13 06:42:09 localhost systemd[1]: Reached target Remote File Systems.
Dec 13 06:42:09 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 13 06:42:09 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 13 06:42:09 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 13 06:42:09 localhost systemd-fsck[524]: /usr/sbin/fsck.xfs: XFS file system.
Dec 13 06:42:09 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 06:42:09 localhost systemd[1]: Mounting /sysroot...
Dec 13 06:42:09 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 13 06:42:09 localhost kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 13 06:42:09 localhost kernel: XFS (vda1): Ending clean mount
Dec 13 06:42:09 localhost systemd[1]: Mounted /sysroot.
Dec 13 06:42:09 localhost systemd[1]: Reached target Initrd Root File System.
Dec 13 06:42:09 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 13 06:42:09 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 13 06:42:09 localhost systemd[1]: Reached target Initrd File Systems.
Dec 13 06:42:09 localhost systemd[1]: Reached target Initrd Default Target.
Dec 13 06:42:09 localhost systemd[1]: Starting dracut mount hook...
Dec 13 06:42:09 localhost systemd[1]: Finished dracut mount hook.
Dec 13 06:42:09 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 13 06:42:09 localhost rpc.idmapd[417]: exiting on signal 15
Dec 13 06:42:09 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 13 06:42:09 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 13 06:42:09 localhost systemd[1]: Stopped target Network.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Timer Units.
Dec 13 06:42:09 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 13 06:42:09 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Basic System.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Path Units.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Remote File Systems.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Slice Units.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Socket Units.
Dec 13 06:42:09 localhost systemd[1]: Stopped target System Initialization.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Local File Systems.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Swaps.
Dec 13 06:42:09 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped dracut mount hook.
Dec 13 06:42:09 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 13 06:42:09 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 13 06:42:09 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 13 06:42:09 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 13 06:42:09 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 13 06:42:09 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 13 06:42:09 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 13 06:42:09 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 13 06:42:09 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 13 06:42:09 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 13 06:42:09 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 13 06:42:09 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 13 06:42:09 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Closed udev Control Socket.
Dec 13 06:42:09 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Closed udev Kernel Socket.
Dec 13 06:42:09 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 13 06:42:09 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 13 06:42:09 localhost systemd[1]: Starting Cleanup udev Database...
Dec 13 06:42:09 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 13 06:42:09 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 13 06:42:09 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Stopped Create System Users.
Dec 13 06:42:09 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 06:42:09 localhost systemd[1]: Finished Cleanup udev Database.
Dec 13 06:42:09 localhost systemd[1]: Reached target Switch Root.
Dec 13 06:42:09 localhost systemd[1]: Starting Switch Root...
Dec 13 06:42:09 localhost systemd[1]: Switching root.
Dec 13 06:42:09 localhost systemd-journald[280]: Received SIGTERM from PID 1 (systemd).
Dec 13 06:42:09 localhost systemd-journald[280]: Journal stopped
Dec 13 06:42:10 localhost kernel: audit: type=1404 audit(1765608129.782:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 13 06:42:10 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 06:42:10 localhost kernel: SELinux:  policy capability open_perms=1
Dec 13 06:42:10 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 06:42:10 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 13 06:42:10 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 06:42:10 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 06:42:10 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 06:42:10 localhost kernel: audit: type=1403 audit(1765608129.899:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 06:42:10 localhost systemd[1]: Successfully loaded SELinux policy in 120.941ms.
Dec 13 06:42:10 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.238ms.
Dec 13 06:42:10 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 06:42:10 localhost systemd[1]: Detected virtualization kvm.
Dec 13 06:42:10 localhost systemd[1]: Detected architecture x86-64.
Dec 13 06:42:10 localhost systemd-rc-local-generator[609]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 06:42:10 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 06:42:10 localhost systemd[1]: Stopped Switch Root.
Dec 13 06:42:10 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 06:42:10 localhost systemd[1]: Created slice Slice /system/getty.
Dec 13 06:42:10 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 13 06:42:10 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 13 06:42:10 localhost systemd[1]: Created slice User and Session Slice.
Dec 13 06:42:10 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 06:42:10 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 13 06:42:10 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 13 06:42:10 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 13 06:42:10 localhost systemd[1]: Stopped target Switch Root.
Dec 13 06:42:10 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 13 06:42:10 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 13 06:42:10 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 13 06:42:10 localhost systemd[1]: Reached target Path Units.
Dec 13 06:42:10 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 13 06:42:10 localhost systemd[1]: Reached target Slice Units.
Dec 13 06:42:10 localhost systemd[1]: Reached target Swaps.
Dec 13 06:42:10 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 13 06:42:10 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 13 06:42:10 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 13 06:42:10 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 13 06:42:10 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 13 06:42:10 localhost systemd[1]: Listening on udev Control Socket.
Dec 13 06:42:10 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 13 06:42:10 localhost systemd[1]: Mounting Huge Pages File System...
Dec 13 06:42:10 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 13 06:42:10 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 13 06:42:10 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 13 06:42:10 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 13 06:42:10 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 13 06:42:10 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 13 06:42:10 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 13 06:42:10 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 13 06:42:10 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 13 06:42:10 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 13 06:42:10 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 06:42:10 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 13 06:42:10 localhost systemd[1]: Stopped Journal Service.
Dec 13 06:42:10 localhost systemd[1]: Starting Journal Service...
Dec 13 06:42:10 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 06:42:10 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 13 06:42:10 localhost kernel: fuse: init (API version 7.37)
Dec 13 06:42:10 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:42:10 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 13 06:42:10 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 06:42:10 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 13 06:42:10 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 13 06:42:10 localhost systemd-journald[650]: Journal started
Dec 13 06:42:10 localhost systemd-journald[650]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 13 06:42:10 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 13 06:42:10 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 06:42:10 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 13 06:42:10 localhost systemd[1]: Started Journal Service.
Dec 13 06:42:10 localhost systemd[1]: Mounted Huge Pages File System.
Dec 13 06:42:10 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 13 06:42:10 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 13 06:42:10 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 13 06:42:10 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 13 06:42:10 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 06:42:10 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 13 06:42:10 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 06:42:10 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 13 06:42:10 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 06:42:10 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 13 06:42:10 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 13 06:42:10 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 13 06:42:10 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 13 06:42:10 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 13 06:42:10 localhost systemd[1]: Mounting FUSE Control File System...
Dec 13 06:42:10 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 13 06:42:10 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 13 06:42:10 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 13 06:42:10 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 06:42:10 localhost kernel: ACPI: bus type drm_connector registered
Dec 13 06:42:10 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 13 06:42:10 localhost systemd[1]: Starting Create System Users...
Dec 13 06:42:10 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 06:42:10 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 13 06:42:10 localhost systemd-journald[650]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 13 06:42:10 localhost systemd-journald[650]: Received client request to flush runtime journal.
Dec 13 06:42:10 localhost systemd[1]: Mounted FUSE Control File System.
Dec 13 06:42:10 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 13 06:42:10 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 13 06:42:10 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 13 06:42:10 localhost systemd[1]: Finished Create System Users.
Dec 13 06:42:10 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 13 06:42:10 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 13 06:42:10 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 13 06:42:10 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 13 06:42:10 localhost systemd[1]: Reached target Local File Systems.
Dec 13 06:42:10 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 13 06:42:10 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 13 06:42:10 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 06:42:10 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 13 06:42:10 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 13 06:42:10 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 13 06:42:10 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 06:42:10 localhost bootctl[667]: Couldn't find EFI system partition, skipping.
Dec 13 06:42:10 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 13 06:42:10 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 06:42:10 localhost systemd[1]: Starting Security Auditing Service...
Dec 13 06:42:10 localhost systemd[1]: Starting RPC Bind...
Dec 13 06:42:10 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 13 06:42:10 localhost auditd[673]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 13 06:42:10 localhost auditd[673]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 13 06:42:10 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 13 06:42:10 localhost systemd[1]: Started RPC Bind.
Dec 13 06:42:10 localhost augenrules[678]: /sbin/augenrules: No change
Dec 13 06:42:10 localhost augenrules[693]: No rules
Dec 13 06:42:10 localhost augenrules[693]: enabled 1
Dec 13 06:42:10 localhost augenrules[693]: failure 1
Dec 13 06:42:10 localhost augenrules[693]: pid 673
Dec 13 06:42:10 localhost augenrules[693]: rate_limit 0
Dec 13 06:42:10 localhost augenrules[693]: backlog_limit 8192
Dec 13 06:42:10 localhost augenrules[693]: lost 0
Dec 13 06:42:10 localhost augenrules[693]: backlog 0
Dec 13 06:42:10 localhost augenrules[693]: backlog_wait_time 60000
Dec 13 06:42:10 localhost augenrules[693]: backlog_wait_time_actual 0
Dec 13 06:42:10 localhost augenrules[693]: enabled 1
Dec 13 06:42:10 localhost augenrules[693]: failure 1
Dec 13 06:42:10 localhost augenrules[693]: pid 673
Dec 13 06:42:10 localhost augenrules[693]: rate_limit 0
Dec 13 06:42:10 localhost augenrules[693]: backlog_limit 8192
Dec 13 06:42:10 localhost augenrules[693]: lost 0
Dec 13 06:42:10 localhost augenrules[693]: backlog 0
Dec 13 06:42:10 localhost augenrules[693]: backlog_wait_time 60000
Dec 13 06:42:10 localhost augenrules[693]: backlog_wait_time_actual 0
Dec 13 06:42:10 localhost augenrules[693]: enabled 1
Dec 13 06:42:10 localhost augenrules[693]: failure 1
Dec 13 06:42:10 localhost augenrules[693]: pid 673
Dec 13 06:42:10 localhost augenrules[693]: rate_limit 0
Dec 13 06:42:10 localhost augenrules[693]: backlog_limit 8192
Dec 13 06:42:10 localhost augenrules[693]: lost 0
Dec 13 06:42:10 localhost augenrules[693]: backlog 0
Dec 13 06:42:10 localhost augenrules[693]: backlog_wait_time 60000
Dec 13 06:42:10 localhost augenrules[693]: backlog_wait_time_actual 0
Dec 13 06:42:10 localhost systemd[1]: Started Security Auditing Service.
Dec 13 06:42:10 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 13 06:42:10 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 13 06:42:10 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 13 06:42:10 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 13 06:42:10 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 06:42:10 localhost systemd[1]: Starting Update is Completed...
Dec 13 06:42:10 localhost systemd[1]: Finished Update is Completed.
Dec 13 06:42:10 localhost systemd-udevd[701]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 06:42:10 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 06:42:10 localhost systemd[1]: Reached target System Initialization.
Dec 13 06:42:10 localhost systemd[1]: Started dnf makecache --timer.
Dec 13 06:42:10 localhost systemd[1]: Started Daily rotation of log files.
Dec 13 06:42:10 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 13 06:42:10 localhost systemd[1]: Reached target Timer Units.
Dec 13 06:42:10 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 13 06:42:10 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 13 06:42:10 localhost systemd[1]: Reached target Socket Units.
Dec 13 06:42:10 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 13 06:42:10 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:42:10 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 13 06:42:10 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 13 06:42:10 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 06:42:10 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 13 06:42:10 localhost systemd-udevd[713]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 06:42:10 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 13 06:42:10 localhost systemd[1]: Reached target Basic System.
Dec 13 06:42:10 localhost dbus-broker-lau[727]: Ready
Dec 13 06:42:10 localhost systemd[1]: Starting NTP client/server...
Dec 13 06:42:10 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 13 06:42:10 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 13 06:42:10 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 13 06:42:10 localhost systemd[1]: Started irqbalance daemon.
Dec 13 06:42:10 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 13 06:42:10 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 06:42:10 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 06:42:10 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 06:42:10 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 13 06:42:10 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 13 06:42:10 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 13 06:42:10 localhost systemd[1]: Starting User Login Management...
Dec 13 06:42:10 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 13 06:42:10 localhost chronyd[754]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 13 06:42:10 localhost chronyd[754]: Loaded 0 symmetric keys
Dec 13 06:42:10 localhost chronyd[754]: Using right/UTC timezone to obtain leap second data
Dec 13 06:42:10 localhost chronyd[754]: Loaded seccomp filter (level 2)
Dec 13 06:42:10 localhost systemd[1]: Started NTP client/server.
Dec 13 06:42:10 localhost systemd-logind[745]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 06:42:10 localhost systemd-logind[745]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 06:42:10 localhost systemd-logind[745]: New seat seat0.
Dec 13 06:42:10 localhost systemd[1]: Started User Login Management.
Dec 13 06:42:10 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 13 06:42:10 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 13 06:42:10 localhost kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Dec 13 06:42:10 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 13 06:42:10 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 13 06:42:11 localhost kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 13 06:42:11 localhost kernel: iTCO_vendor_support: vendor-support=0
Dec 13 06:42:11 localhost kernel: Console: switching to colour dummy device 80x25
Dec 13 06:42:11 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 06:42:11 localhost kernel: [drm] features: -context_init
Dec 13 06:42:11 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Dec 13 06:42:11 localhost kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 06:42:11 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 06:42:11 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 06:42:11 localhost kernel: [drm] number of scanouts: 1
Dec 13 06:42:11 localhost kernel: [drm] number of cap sets: 0
Dec 13 06:42:11 localhost iptables.init[739]: iptables: Applying firewall rules: [  OK  ]
Dec 13 06:42:11 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Dec 13 06:42:11 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 13 06:42:11 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 06:42:11 localhost kernel: Console: switching to colour frame buffer device 160x50
Dec 13 06:42:11 localhost kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 06:42:11 localhost kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Dec 13 06:42:11 localhost kernel: kvm_amd: TSC scaling supported
Dec 13 06:42:11 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 13 06:42:11 localhost kernel: kvm_amd: Nested Paging enabled
Dec 13 06:42:11 localhost kernel: kvm_amd: LBR virtualization supported
Dec 13 06:42:11 localhost kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 06:42:11 localhost kernel: kvm_amd: Virtual GIF supported
Dec 13 06:42:11 localhost cloud-init[794]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 13 Dec 2025 06:42:11 +0000. Up 4.93 seconds.
Dec 13 06:42:11 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 13 06:42:11 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 13 06:42:11 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpzpl1w9jx.mount: Deactivated successfully.
Dec 13 06:42:11 localhost systemd[1]: Starting Hostname Service...
Dec 13 06:42:11 localhost systemd[1]: Started Hostname Service.
Dec 13 06:42:11 np0005558317 systemd-hostnamed[808]: Hostname set to <np0005558317> (static)
Dec 13 06:42:11 np0005558317 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 13 06:42:11 np0005558317 systemd[1]: Reached target Preparation for Network.
Dec 13 06:42:11 np0005558317 systemd[1]: Starting Network Manager...
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7334] NetworkManager (version 1.54.2-1.el9) is starting... (boot:7e7986d9-0598-4067-a630-6e2fad28fcbc)
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7338] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7417] manager[0x55cb2471e000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7444] hostname: hostname: using hostnamed
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7444] hostname: static hostname changed from (none) to "np0005558317"
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7446] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7545] manager[0x55cb2471e000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7545] manager[0x55cb2471e000]: rfkill: WWAN hardware radio set enabled
Dec 13 06:42:11 np0005558317 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7589] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7589] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7590] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7590] manager: Networking is enabled by state file
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7591] settings: Loaded settings plugin: keyfile (internal)
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7613] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7628] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7637] dhcp: init: Using DHCP client 'internal'
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7639] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7648] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7654] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7658] device (lo): Activation: starting connection 'lo' (08c9145e-912a-4b86-86a1-5730fa82ae86)
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7665] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7666] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7686] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7688] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7689] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7690] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7691] device (eth0): carrier: link connected
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7692] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7696] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 13 06:42:11 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7701] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7704] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7704] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7706] manager: NetworkManager state is now CONNECTING
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7707] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7712] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7717] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:42:11 np0005558317 systemd[1]: Started Network Manager.
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7720] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Dec 13 06:42:11 np0005558317 systemd[1]: Reached target Network.
Dec 13 06:42:11 np0005558317 systemd[1]: Starting Network Manager Wait Online...
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7766] dhcp4 (eth0): state changed new lease, address=192.168.25.195
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7774] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 06:42:11 np0005558317 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 13 06:42:11 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7869] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7883] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 06:42:11 np0005558317 NetworkManager[812]: <info>  [1765608131.7888] device (lo): Activation: successful, device activated.
Dec 13 06:42:11 np0005558317 systemd[1]: Started GSSAPI Proxy Daemon.
Dec 13 06:42:11 np0005558317 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 13 06:42:11 np0005558317 systemd[1]: Reached target NFS client services.
Dec 13 06:42:11 np0005558317 systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 06:42:11 np0005558317 systemd[1]: Reached target Remote File Systems.
Dec 13 06:42:11 np0005558317 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:42:12 np0005558317 NetworkManager[812]: <info>  [1765608132.8789] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:42:13 np0005558317 NetworkManager[812]: <info>  [1765608133.9691] dhcp6 (eth0): state changed new lease, address=2001:db8::1cf
Dec 13 06:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5679] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 06:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5710] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 06:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5712] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 06:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5715] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 06:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5718] device (eth0): Activation: successful, device activated.
Dec 13 06:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5723] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 06:42:15 np0005558317 NetworkManager[812]: <info>  [1765608135.5725] manager: startup complete
Dec 13 06:42:15 np0005558317 systemd[1]: Finished Network Manager Wait Online.
Dec 13 06:42:15 np0005558317 systemd[1]: Starting Cloud-init: Network Stage...
Dec 13 06:42:15 np0005558317 cloud-init[878]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 13 Dec 2025 06:42:15 +0000. Up 9.41 seconds.
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |  eth0  | True |        192.168.25.195       | 255.255.255.0 | global | fa:16:3e:b1:0c:2a |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |  eth0  | True |      2001:db8::1cf/128      |       .       | global | fa:16:3e:b1:0c:2a |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |  eth0  | True | fe80::f816:3eff:feb1:c2a/64 |       .       |  link  | fa:16:3e:b1:0c:2a |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   0   |     0.0.0.0     | 192.168.25.1 |     0.0.0.0     |    eth0   |   UG  |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   1   | 169.254.169.254 | 192.168.25.2 | 255.255.255.255 |    eth0   |  UGH  |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   2   |   192.168.25.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: ++++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +-------+---------------+-------------+-----------+-------+
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: | Route |  Destination  |   Gateway   | Interface | Flags |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +-------+---------------+-------------+-----------+-------+
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   1   |  2001:db8::1  |      ::     |    eth0   |   U   |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   2   | 2001:db8::1cf |      ::     |    eth0   |   U   |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   3   |   fe80::/64   |      ::     |    eth0   |   U   |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   4   |      ::/0     | 2001:db8::1 |    eth0   |   UG  |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   6   |     local     |      ::     |    eth0   |   U   |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   7   |     local     |      ::     |    eth0   |   U   |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: |   8   |   multicast   |      ::     |    eth0   |   U   |
Dec 13 06:42:15 np0005558317 cloud-init[878]: ci-info: +-------+---------------+-------------+-----------+-------+
Dec 13 06:42:16 np0005558317 useradd[945]: new group: name=cloud-user, GID=1001
Dec 13 06:42:16 np0005558317 useradd[945]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 13 06:42:16 np0005558317 useradd[945]: add 'cloud-user' to group 'adm'
Dec 13 06:42:16 np0005558317 useradd[945]: add 'cloud-user' to group 'systemd-journal'
Dec 13 06:42:16 np0005558317 useradd[945]: add 'cloud-user' to shadow group 'adm'
Dec 13 06:42:16 np0005558317 useradd[945]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 13 06:42:16 np0005558317 cloud-init[878]: Generating public/private rsa key pair.
Dec 13 06:42:16 np0005558317 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 13 06:42:16 np0005558317 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 13 06:42:16 np0005558317 cloud-init[878]: The key fingerprint is:
Dec 13 06:42:16 np0005558317 cloud-init[878]: SHA256:GSQgTqH8TLCYUmB1S+je0NSUnMG2hLMJ9cqychnvHnQ root@np0005558317
Dec 13 06:42:16 np0005558317 cloud-init[878]: The key's randomart image is:
Dec 13 06:42:16 np0005558317 cloud-init[878]: +---[RSA 3072]----+
Dec 13 06:42:16 np0005558317 cloud-init[878]: |.+*o+=Bo=        |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |+*o+o+o@         |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |=oo.+.*.o        |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |. +o.+.. o       |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |  .=o+ ES        |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |   .B..          |
Dec 13 06:42:16 np0005558317 cloud-init[878]: | . + o           |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |  o . .          |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |    .o           |
Dec 13 06:42:16 np0005558317 cloud-init[878]: +----[SHA256]-----+
Dec 13 06:42:16 np0005558317 cloud-init[878]: Generating public/private ecdsa key pair.
Dec 13 06:42:16 np0005558317 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 13 06:42:16 np0005558317 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 13 06:42:16 np0005558317 cloud-init[878]: The key fingerprint is:
Dec 13 06:42:16 np0005558317 cloud-init[878]: SHA256:ZhJNiM3whCkPRB3WQX/q5XpxOzDsefXdlDO7VjNeKk0 root@np0005558317
Dec 13 06:42:16 np0005558317 cloud-init[878]: The key's randomart image is:
Dec 13 06:42:16 np0005558317 cloud-init[878]: +---[ECDSA 256]---+
Dec 13 06:42:16 np0005558317 cloud-init[878]: | oo.+@+o.        |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |  o.=o=+         |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |   +  o o .      |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |    .  . o       |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |      . S..     .|
Dec 13 06:42:16 np0005558317 cloud-init[878]: |       = o= . E*+|
Dec 13 06:42:16 np0005558317 cloud-init[878]: |        ...* =.+@|
Dec 13 06:42:16 np0005558317 cloud-init[878]: |         .+ = o++|
Dec 13 06:42:16 np0005558317 cloud-init[878]: |        .. . o...|
Dec 13 06:42:16 np0005558317 cloud-init[878]: +----[SHA256]-----+
Dec 13 06:42:16 np0005558317 cloud-init[878]: Generating public/private ed25519 key pair.
Dec 13 06:42:16 np0005558317 cloud-init[878]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 13 06:42:16 np0005558317 cloud-init[878]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 13 06:42:16 np0005558317 cloud-init[878]: The key fingerprint is:
Dec 13 06:42:16 np0005558317 cloud-init[878]: SHA256:CGZApeQZ/l1aGDFrHyDPsKEwVnJrfSUR1oPeef/8Nyg root@np0005558317
Dec 13 06:42:16 np0005558317 cloud-init[878]: The key's randomart image is:
Dec 13 06:42:16 np0005558317 cloud-init[878]: +--[ED25519 256]--+
Dec 13 06:42:16 np0005558317 cloud-init[878]: |o+*+= =*=.       |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |.*o*oB.Boo       |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |  *o=.B.= o      |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |  .+ +.B + .     |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |    . + S . .    |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |             .   |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |              +  |
Dec 13 06:42:16 np0005558317 cloud-init[878]: |           E . +.|
Dec 13 06:42:16 np0005558317 cloud-init[878]: |            .   =|
Dec 13 06:42:16 np0005558317 cloud-init[878]: +----[SHA256]-----+
Dec 13 06:42:16 np0005558317 systemd[1]: Finished Cloud-init: Network Stage.
Dec 13 06:42:16 np0005558317 systemd[1]: Reached target Cloud-config availability.
Dec 13 06:42:16 np0005558317 systemd[1]: Reached target Network is Online.
Dec 13 06:42:16 np0005558317 systemd[1]: Starting Cloud-init: Config Stage...
Dec 13 06:42:16 np0005558317 systemd[1]: Starting Crash recovery kernel arming...
Dec 13 06:42:16 np0005558317 systemd[1]: Starting Notify NFS peers of a restart...
Dec 13 06:42:16 np0005558317 systemd[1]: Starting System Logging Service...
Dec 13 06:42:16 np0005558317 systemd[1]: Starting OpenSSH server daemon...
Dec 13 06:42:16 np0005558317 systemd[1]: Starting Permit User Sessions...
Dec 13 06:42:16 np0005558317 sm-notify[961]: Version 2.5.4 starting
Dec 13 06:42:16 np0005558317 sshd[963]: Server listening on 0.0.0.0 port 22.
Dec 13 06:42:16 np0005558317 sshd[963]: Server listening on :: port 22.
Dec 13 06:42:16 np0005558317 systemd[1]: Started OpenSSH server daemon.
Dec 13 06:42:16 np0005558317 systemd[1]: Started Notify NFS peers of a restart.
Dec 13 06:42:16 np0005558317 systemd[1]: Finished Permit User Sessions.
Dec 13 06:42:16 np0005558317 rsyslogd[962]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="962" x-info="https://www.rsyslog.com"] start
Dec 13 06:42:16 np0005558317 rsyslogd[962]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 13 06:42:16 np0005558317 crond[968]: (CRON) STARTUP (1.5.7)
Dec 13 06:42:16 np0005558317 crond[968]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 13 06:42:16 np0005558317 crond[968]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 59% if used.)
Dec 13 06:42:16 np0005558317 crond[968]: (CRON) INFO (running with inotify support)
Dec 13 06:42:16 np0005558317 systemd[1]: Started Command Scheduler.
Dec 13 06:42:16 np0005558317 sshd-session[970]: Unable to negotiate with 192.168.25.11 port 42846: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 13 06:42:16 np0005558317 systemd[1]: Started Getty on tty1.
Dec 13 06:42:16 np0005558317 systemd[1]: Started Serial Getty on ttyS0.
Dec 13 06:42:16 np0005558317 systemd[1]: Reached target Login Prompts.
Dec 13 06:42:16 np0005558317 systemd[1]: Started System Logging Service.
Dec 13 06:42:16 np0005558317 sshd-session[983]: Unable to negotiate with 192.168.25.11 port 42874: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 13 06:42:16 np0005558317 systemd[1]: Reached target Multi-User System.
Dec 13 06:42:16 np0005558317 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 13 06:42:17 np0005558317 sshd-session[988]: Unable to negotiate with 192.168.25.11 port 42888: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 13 06:42:17 np0005558317 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 06:42:17 np0005558317 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 13 06:42:17 np0005558317 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 06:42:17 np0005558317 sshd-session[1003]: Unable to negotiate with 192.168.25.11 port 42914: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 13 06:42:17 np0005558317 sshd-session[1014]: Unable to negotiate with 192.168.25.11 port 42928: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 13 06:42:17 np0005558317 sshd-session[966]: Connection closed by 192.168.25.11 port 42838 [preauth]
Dec 13 06:42:17 np0005558317 sshd-session[975]: Connection closed by 192.168.25.11 port 42862 [preauth]
Dec 13 06:42:17 np0005558317 kdumpctl[977]: kdump: No kdump initial ramdisk found.
Dec 13 06:42:17 np0005558317 kdumpctl[977]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 13 06:42:17 np0005558317 sshd-session[992]: Connection closed by 192.168.25.11 port 42892 [preauth]
Dec 13 06:42:17 np0005558317 sshd-session[996]: Connection closed by 192.168.25.11 port 42902 [preauth]
Dec 13 06:42:17 np0005558317 cloud-init[1112]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 13 Dec 2025 06:42:17 +0000. Up 10.77 seconds.
Dec 13 06:42:17 np0005558317 systemd[1]: Finished Cloud-init: Config Stage.
Dec 13 06:42:17 np0005558317 systemd[1]: Starting Cloud-init: Final Stage...
Dec 13 06:42:17 np0005558317 dracut[1240]: dracut-057-102.git20250818.el9
Dec 13 06:42:17 np0005558317 cloud-init[1258]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 13 Dec 2025 06:42:17 +0000. Up 11.10 seconds.
Dec 13 06:42:17 np0005558317 dracut[1242]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 13 06:42:17 np0005558317 cloud-init[1288]: #############################################################
Dec 13 06:42:17 np0005558317 cloud-init[1291]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 13 06:42:17 np0005558317 cloud-init[1300]: 256 SHA256:ZhJNiM3whCkPRB3WQX/q5XpxOzDsefXdlDO7VjNeKk0 root@np0005558317 (ECDSA)
Dec 13 06:42:17 np0005558317 cloud-init[1308]: 256 SHA256:CGZApeQZ/l1aGDFrHyDPsKEwVnJrfSUR1oPeef/8Nyg root@np0005558317 (ED25519)
Dec 13 06:42:17 np0005558317 cloud-init[1312]: 3072 SHA256:GSQgTqH8TLCYUmB1S+je0NSUnMG2hLMJ9cqychnvHnQ root@np0005558317 (RSA)
Dec 13 06:42:17 np0005558317 cloud-init[1316]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 13 06:42:17 np0005558317 cloud-init[1318]: #############################################################
Dec 13 06:42:17 np0005558317 cloud-init[1258]: Cloud-init v. 24.4-7.el9 finished at Sat, 13 Dec 2025 06:42:17 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.23 seconds
Dec 13 06:42:17 np0005558317 systemd[1]: Finished Cloud-init: Final Stage.
Dec 13 06:42:17 np0005558317 systemd[1]: Reached target Cloud-init target.
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 13 06:42:17 np0005558317 dracut[1242]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 13 06:42:17 np0005558317 dracut[1242]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 13 06:42:17 np0005558317 dracut[1242]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 13 06:42:18 np0005558317 chronyd[754]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Dec 13 06:42:18 np0005558317 chronyd[754]: System clock TAI offset set to 37 seconds
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: memstrack is not available
Dec 13 06:42:18 np0005558317 dracut[1242]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 13 06:42:18 np0005558317 dracut[1242]: memstrack is not available
Dec 13 06:42:18 np0005558317 dracut[1242]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 13 06:42:18 np0005558317 dracut[1242]: *** Including module: systemd ***
Dec 13 06:42:18 np0005558317 dracut[1242]: *** Including module: fips ***
Dec 13 06:42:18 np0005558317 dracut[1242]: *** Including module: systemd-initrd ***
Dec 13 06:42:18 np0005558317 dracut[1242]: *** Including module: i18n ***
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: drm ***
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: prefixdevname ***
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: kernel-modules ***
Dec 13 06:42:19 np0005558317 kernel: block vda: the capability attribute has been deprecated.
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: kernel-modules-extra ***
Dec 13 06:42:19 np0005558317 dracut[1242]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 13 06:42:19 np0005558317 dracut[1242]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 13 06:42:19 np0005558317 dracut[1242]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 13 06:42:19 np0005558317 dracut[1242]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: qemu ***
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: fstab-sys ***
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: rootfs-block ***
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: terminfo ***
Dec 13 06:42:19 np0005558317 dracut[1242]: *** Including module: udev-rules ***
Dec 13 06:42:20 np0005558317 dracut[1242]: Skipping udev rule: 91-permissions.rules
Dec 13 06:42:20 np0005558317 dracut[1242]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 13 06:42:20 np0005558317 dracut[1242]: *** Including module: virtiofs ***
Dec 13 06:42:20 np0005558317 dracut[1242]: *** Including module: dracut-systemd ***
Dec 13 06:42:20 np0005558317 dracut[1242]: *** Including module: usrmount ***
Dec 13 06:42:20 np0005558317 dracut[1242]: *** Including module: base ***
Dec 13 06:42:20 np0005558317 dracut[1242]: *** Including module: fs-lib ***
Dec 13 06:42:20 np0005558317 dracut[1242]: *** Including module: kdumpbase ***
Dec 13 06:42:20 np0005558317 dracut[1242]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 13 06:42:20 np0005558317 dracut[1242]:   microcode_ctl module: mangling fw_dir
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel" is ignored
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 13 06:42:20 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 13 06:42:21 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 13 06:42:21 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 13 06:42:21 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 13 06:42:21 np0005558317 dracut[1242]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 13 06:42:21 np0005558317 dracut[1242]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 13 06:42:21 np0005558317 dracut[1242]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 13 06:42:21 np0005558317 dracut[1242]: *** Including module: openssl ***
Dec 13 06:42:21 np0005558317 dracut[1242]: *** Including module: shutdown ***
Dec 13 06:42:21 np0005558317 dracut[1242]: *** Including module: squash ***
Dec 13 06:42:21 np0005558317 dracut[1242]: *** Including modules done ***
Dec 13 06:42:21 np0005558317 dracut[1242]: *** Installing kernel module dependencies ***
Dec 13 06:42:21 np0005558317 irqbalance[740]: Cannot change IRQ 45 affinity: Operation not permitted
Dec 13 06:42:21 np0005558317 irqbalance[740]: IRQ 45 affinity is now unmanaged
Dec 13 06:42:21 np0005558317 irqbalance[740]: Cannot change IRQ 48 affinity: Operation not permitted
Dec 13 06:42:21 np0005558317 irqbalance[740]: IRQ 48 affinity is now unmanaged
Dec 13 06:42:21 np0005558317 irqbalance[740]: Cannot change IRQ 46 affinity: Operation not permitted
Dec 13 06:42:21 np0005558317 irqbalance[740]: IRQ 46 affinity is now unmanaged
Dec 13 06:42:21 np0005558317 dracut[1242]: *** Installing kernel module dependencies done ***
Dec 13 06:42:21 np0005558317 dracut[1242]: *** Resolving executable dependencies ***
Dec 13 06:42:22 np0005558317 dracut[1242]: *** Resolving executable dependencies done ***
Dec 13 06:42:22 np0005558317 dracut[1242]: *** Generating early-microcode cpio image ***
Dec 13 06:42:22 np0005558317 dracut[1242]: *** Store current command line parameters ***
Dec 13 06:42:22 np0005558317 dracut[1242]: Stored kernel commandline:
Dec 13 06:42:22 np0005558317 dracut[1242]: No dracut internal kernel commandline stored in the initramfs
Dec 13 06:42:22 np0005558317 dracut[1242]: *** Install squash loader ***
Dec 13 06:42:23 np0005558317 dracut[1242]: *** Squashing the files inside the initramfs ***
Dec 13 06:42:25 np0005558317 dracut[1242]: *** Squashing the files inside the initramfs done ***
Dec 13 06:42:25 np0005558317 dracut[1242]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 13 06:42:25 np0005558317 dracut[1242]: *** Hardlinking files ***
Dec 13 06:42:25 np0005558317 dracut[1242]: Mode:           real
Dec 13 06:42:25 np0005558317 dracut[1242]: Files:          50
Dec 13 06:42:25 np0005558317 dracut[1242]: Linked:         0 files
Dec 13 06:42:25 np0005558317 dracut[1242]: Compared:       0 xattrs
Dec 13 06:42:25 np0005558317 dracut[1242]: Compared:       0 files
Dec 13 06:42:25 np0005558317 dracut[1242]: Saved:          0 B
Dec 13 06:42:25 np0005558317 dracut[1242]: Duration:       0.000367 seconds
Dec 13 06:42:25 np0005558317 dracut[1242]: *** Hardlinking files done ***
Dec 13 06:42:25 np0005558317 dracut[1242]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 13 06:42:25 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 06:42:25 np0005558317 kdumpctl[977]: kdump: kexec: loaded kdump kernel
Dec 13 06:42:25 np0005558317 kdumpctl[977]: kdump: Starting kdump: [OK]
Dec 13 06:42:25 np0005558317 systemd[1]: Finished Crash recovery kernel arming.
Dec 13 06:42:25 np0005558317 systemd[1]: Startup finished in 1.353s (kernel) + 2.019s (initrd) + 15.962s (userspace) = 19.335s.
Dec 13 06:42:41 np0005558317 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 06:42:43 np0005558317 sshd-session[4369]: Accepted publickey for zuul from 192.168.25.12 port 58500 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 13 06:42:43 np0005558317 systemd-logind[745]: New session 1 of user zuul.
Dec 13 06:42:43 np0005558317 systemd[1]: Created slice User Slice of UID 1000.
Dec 13 06:42:43 np0005558317 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 13 06:42:43 np0005558317 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 13 06:42:43 np0005558317 systemd[1]: Starting User Manager for UID 1000...
Dec 13 06:42:43 np0005558317 systemd[4373]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 06:42:43 np0005558317 systemd[4373]: Queued start job for default target Main User Target.
Dec 13 06:42:43 np0005558317 systemd[4373]: Created slice User Application Slice.
Dec 13 06:42:43 np0005558317 systemd[4373]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 06:42:43 np0005558317 systemd[4373]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 06:42:43 np0005558317 systemd[4373]: Reached target Paths.
Dec 13 06:42:43 np0005558317 systemd[4373]: Reached target Timers.
Dec 13 06:42:43 np0005558317 systemd[4373]: Starting D-Bus User Message Bus Socket...
Dec 13 06:42:43 np0005558317 systemd[4373]: Starting Create User's Volatile Files and Directories...
Dec 13 06:42:43 np0005558317 systemd[4373]: Listening on D-Bus User Message Bus Socket.
Dec 13 06:42:43 np0005558317 systemd[4373]: Reached target Sockets.
Dec 13 06:42:43 np0005558317 systemd[4373]: Finished Create User's Volatile Files and Directories.
Dec 13 06:42:43 np0005558317 systemd[4373]: Reached target Basic System.
Dec 13 06:42:43 np0005558317 systemd[4373]: Reached target Main User Target.
Dec 13 06:42:43 np0005558317 systemd[4373]: Startup finished in 88ms.
Dec 13 06:42:43 np0005558317 systemd[1]: Started User Manager for UID 1000.
Dec 13 06:42:43 np0005558317 systemd[1]: Started Session 1 of User zuul.
Dec 13 06:42:43 np0005558317 sshd-session[4369]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 06:42:44 np0005558317 python3[4455]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 06:42:46 np0005558317 python3[4483]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 06:42:50 np0005558317 python3[4537]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 06:42:51 np0005558317 python3[4577]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 13 06:42:53 np0005558317 python3[4603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcfbFj32J6mpPMis8/nxcdxedFPWsAb48sQnk8/dqmA/0o7eojJNPvtwlioIjWQr/DJB6HjDPSB3NLuJPBSZnrNXlU85vHSy9U6+5lLAz6HZ28xMECtQqQv/iD4tkL7SwrX2dXIu5oOW+FtK2qFeV1Qkujl03B8H3B2uRtWqYL8/zyGhuKhpFnInzOT5JMJr/i5U3Q4mfai5xLM9Fx3245zOHWxY295NK9jkUWvOMnb9O6dcaPGBLsCrJVWkSIWQpHzO5mE+f3YYj4lohS2jaem9HJVWEs+lF7F+b1Eqcid6hw3yrM5FfemVQsE1x5kXbDueDke70soZK8MZDhM8hiX/3OY0csL75CZUeA0+Prard1EJKM0jZjvGkLPtA4/nsPY6CWE69HYvq4xsy8d0tGTHgIu//S8U/e0kkJZrqBCly1yR7a2GJdBckdXwHXdHr8vWYn3GkMhs5exnehoz4V/SMbrIaTHn4dTNqxxeoF7rmzY8Or/Sgprsq8anQjurs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:42:53 np0005558317 python3[4627]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:42:53 np0005558317 python3[4726]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:42:53 np0005558317 python3[4797]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608173.5330253-207-56333510274703/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=6d9437e7328943549472e8c958968345_id_rsa follow=False checksum=5f8b4b192d062490ef0af4c95496c5c1c0b5b3d0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:42:54 np0005558317 python3[4920]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:42:54 np0005558317 python3[4991]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608174.1583068-240-108821366010240/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=6d9437e7328943549472e8c958968345_id_rsa.pub follow=False checksum=8776d64955eefa7798ebe25237f59e8043a353da backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:42:55 np0005558317 python3[5039]: ansible-ping Invoked with data=pong
Dec 13 06:42:56 np0005558317 python3[5063]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 06:42:57 np0005558317 python3[5117]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 13 06:42:58 np0005558317 python3[5149]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:42:58 np0005558317 python3[5173]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:42:59 np0005558317 python3[5197]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:42:59 np0005558317 python3[5221]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:42:59 np0005558317 python3[5245]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:42:59 np0005558317 python3[5269]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:01 np0005558317 sudo[5293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txrdpjjspcjetzynauqcjeidoaxjukxz ; /usr/bin/python3'
Dec 13 06:43:01 np0005558317 sudo[5293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:01 np0005558317 python3[5295]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:01 np0005558317 sudo[5293]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:01 np0005558317 irqbalance[740]: Cannot change IRQ 47 affinity: Operation not permitted
Dec 13 06:43:01 np0005558317 irqbalance[740]: IRQ 47 affinity is now unmanaged
Dec 13 06:43:01 np0005558317 sudo[5371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhiqjhscfhvyabonvbtxdfwbvpuwyfgh ; /usr/bin/python3'
Dec 13 06:43:01 np0005558317 sudo[5371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:01 np0005558317 python3[5373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:43:01 np0005558317 sudo[5371]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:01 np0005558317 sudo[5444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkntldbpcgqxcvorhqlrzdmiidbgbflu ; /usr/bin/python3'
Dec 13 06:43:01 np0005558317 sudo[5444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:01 np0005558317 python3[5446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608181.30276-21-226408615847484/source follow=False _original_basename=mirror_info.sh.j2 checksum=8d04605e615eb785450b583fc5efd2437794600d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:01 np0005558317 sudo[5444]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:02 np0005558317 python3[5494]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:02 np0005558317 python3[5518]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:02 np0005558317 python3[5542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:03 np0005558317 python3[5566]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:03 np0005558317 python3[5590]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:03 np0005558317 python3[5614]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:03 np0005558317 python3[5638]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:03 np0005558317 python3[5662]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:04 np0005558317 python3[5686]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:04 np0005558317 python3[5710]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:04 np0005558317 python3[5734]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:04 np0005558317 python3[5758]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:04 np0005558317 python3[5782]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:05 np0005558317 python3[5806]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:05 np0005558317 python3[5830]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:05 np0005558317 python3[5854]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:05 np0005558317 python3[5878]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:06 np0005558317 python3[5902]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:06 np0005558317 python3[5926]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:06 np0005558317 python3[5950]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:06 np0005558317 python3[5974]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:06 np0005558317 python3[5998]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:07 np0005558317 python3[6022]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:07 np0005558317 python3[6046]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:07 np0005558317 python3[6070]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:07 np0005558317 python3[6094]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:43:10 np0005558317 sudo[6118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-molsqqvjtmbvjvrpeiwcdvxewhjtnlae ; /usr/bin/python3'
Dec 13 06:43:10 np0005558317 sudo[6118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:10 np0005558317 python3[6120]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 06:43:10 np0005558317 systemd[1]: Starting Time & Date Service...
Dec 13 06:43:10 np0005558317 systemd[1]: Started Time & Date Service.
Dec 13 06:43:10 np0005558317 systemd-timedated[6122]: Changed time zone to 'UTC' (UTC).
Dec 13 06:43:10 np0005558317 sudo[6118]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:10 np0005558317 sudo[6149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyqnqidsnvcynbhksmogqdxvtsrebelr ; /usr/bin/python3'
Dec 13 06:43:10 np0005558317 sudo[6149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:10 np0005558317 python3[6151]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:10 np0005558317 sudo[6149]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:10 np0005558317 python3[6227]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:43:11 np0005558317 python3[6298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765608190.6503088-153-113752083553829/source _original_basename=tmp9jg7fw28 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:11 np0005558317 python3[6398]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:43:11 np0005558317 python3[6469]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765608191.2642102-183-59103226582882/source _original_basename=tmpt0s8wv1w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:12 np0005558317 sudo[6569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ounryidkqftvxxpnownwxpuaclltwxka ; /usr/bin/python3'
Dec 13 06:43:12 np0005558317 sudo[6569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:12 np0005558317 python3[6571]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:43:12 np0005558317 sudo[6569]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:12 np0005558317 sudo[6642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkkbbgrgsdupwfqowinzbuheavzpedzl ; /usr/bin/python3'
Dec 13 06:43:12 np0005558317 sudo[6642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:12 np0005558317 python3[6644]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765608192.0678525-231-72806642869882/source _original_basename=tmpkp90t9ad follow=False checksum=b24b3e02803cc66aa95d87527d84945a4821d184 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:12 np0005558317 sudo[6642]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:12 np0005558317 python3[6692]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:43:13 np0005558317 python3[6718]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:43:13 np0005558317 sudo[6796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdarsrxtpluibbqiibsvscldbfsuaqqi ; /usr/bin/python3'
Dec 13 06:43:13 np0005558317 sudo[6796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:13 np0005558317 python3[6798]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:43:13 np0005558317 sudo[6796]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:13 np0005558317 sudo[6869]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtltebtbghyfcftehwnucdirmaejvkew ; /usr/bin/python3'
Dec 13 06:43:13 np0005558317 sudo[6869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:13 np0005558317 python3[6871]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608193.2384923-273-263645010279144/source _original_basename=tmp2ycxx3ex follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:13 np0005558317 sudo[6869]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:13 np0005558317 sudo[6920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsqsvacpplxkcrbsupbooppqkvxrxeh ; /usr/bin/python3'
Dec 13 06:43:13 np0005558317 sudo[6920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:14 np0005558317 python3[6922]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e6f-3cad-0473-33bc-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:43:14 np0005558317 sudo[6920]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:14 np0005558317 python3[6950]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e6f-3cad-0473-33bc-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 13 06:43:15 np0005558317 python3[6978]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:39 np0005558317 sudo[7002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knrkglsbevrgoxzovnhuzerobjzwfymv ; /usr/bin/python3'
Dec 13 06:43:39 np0005558317 sudo[7002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:43:39 np0005558317 python3[7004]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:43:39 np0005558317 sudo[7002]: pam_unix(sudo:session): session closed for user root
Dec 13 06:43:40 np0005558317 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 06:44:02 np0005558317 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 13 06:44:02 np0005558317 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 13 06:44:02 np0005558317 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 13 06:44:02 np0005558317 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Dec 13 06:44:02 np0005558317 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Dec 13 06:44:02 np0005558317 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Dec 13 06:44:02 np0005558317 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Dec 13 06:44:02 np0005558317 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0012] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 06:44:03 np0005558317 systemd-udevd[7007]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0291] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0318] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0323] device (eth1): carrier: link connected
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0326] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0331] policy: auto-activating connection 'Wired connection 1' (4a989926-6152-3dd8-8a07-a3472614de2b)
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0335] device (eth1): Activation: starting connection 'Wired connection 1' (4a989926-6152-3dd8-8a07-a3472614de2b)
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0337] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0339] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0344] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 06:44:03 np0005558317 NetworkManager[812]: <info>  [1765608243.0349] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:44:03 np0005558317 python3[7034]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e6f-3cad-395d-e36d-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:44:13 np0005558317 sudo[7112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtqceqapqthucpommesvsvwqodhjhsqz ; OS_CLOUD=ibm-bm3-nodepool /usr/bin/python3'
Dec 13 06:44:13 np0005558317 sudo[7112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:44:13 np0005558317 python3[7114]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:44:13 np0005558317 sudo[7112]: pam_unix(sudo:session): session closed for user root
Dec 13 06:44:13 np0005558317 sudo[7185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cloxgcftjscokiwdapugcfijcldpwcbj ; OS_CLOUD=ibm-bm3-nodepool /usr/bin/python3'
Dec 13 06:44:13 np0005558317 sudo[7185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:44:13 np0005558317 python3[7187]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608253.0308301-111-65194899838960/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=7f90c6f6ba2ee4a63d653631ac68d02d4cb966d3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:44:13 np0005558317 sudo[7185]: pam_unix(sudo:session): session closed for user root
Dec 13 06:44:13 np0005558317 sudo[7235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnndwfgiyphohbthttlfoubdrvisppwi ; OS_CLOUD=ibm-bm3-nodepool /usr/bin/python3'
Dec 13 06:44:13 np0005558317 sudo[7235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:44:14 np0005558317 python3[7237]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 06:44:14 np0005558317 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 13 06:44:14 np0005558317 systemd[1]: Stopped Network Manager Wait Online.
Dec 13 06:44:14 np0005558317 systemd[1]: Stopping Network Manager Wait Online...
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2068] caught SIGTERM, shutting down normally.
Dec 13 06:44:14 np0005558317 systemd[1]: Stopping Network Manager...
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2075] dhcp4 (eth0): canceled DHCP transaction
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2075] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2075] dhcp4 (eth0): state changed no lease
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2076] dhcp6 (eth0): canceled DHCP transaction
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2076] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2076] dhcp6 (eth0): state changed no lease
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2078] manager: NetworkManager state is now CONNECTING
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2216] dhcp4 (eth1): canceled DHCP transaction
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2216] dhcp4 (eth1): state changed no lease
Dec 13 06:44:14 np0005558317 NetworkManager[812]: <info>  [1765608254.2236] exiting (success)
Dec 13 06:44:14 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 06:44:14 np0005558317 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 13 06:44:14 np0005558317 systemd[1]: Stopped Network Manager.
Dec 13 06:44:14 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 06:44:14 np0005558317 systemd[1]: Starting Network Manager...
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.2586] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:7e7986d9-0598-4067-a630-6e2fad28fcbc)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.2588] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.2634] manager[0x555bfff26000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 06:44:14 np0005558317 systemd[1]: Starting Hostname Service...
Dec 13 06:44:14 np0005558317 systemd[1]: Started Hostname Service.
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3221] hostname: hostname: using hostnamed
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3222] hostname: static hostname changed from (none) to "np0005558317"
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3226] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3231] manager[0x555bfff26000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3232] manager[0x555bfff26000]: rfkill: WWAN hardware radio set enabled
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3257] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3258] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3259] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3260] manager: Networking is enabled by state file
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3262] settings: Loaded settings plugin: keyfile (internal)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3266] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3284] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3294] dhcp: init: Using DHCP client 'internal'
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3297] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3302] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3307] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3314] device (lo): Activation: starting connection 'lo' (08c9145e-912a-4b86-86a1-5730fa82ae86)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3321] device (eth0): carrier: link connected
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3325] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3330] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3331] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3336] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3341] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3347] device (eth1): carrier: link connected
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3351] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3354] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4a989926-6152-3dd8-8a07-a3472614de2b) (indicated)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3355] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3360] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3367] device (eth1): Activation: starting connection 'Wired connection 1' (4a989926-6152-3dd8-8a07-a3472614de2b)
Dec 13 06:44:14 np0005558317 systemd[1]: Started Network Manager.
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3373] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3377] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3378] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3379] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3381] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3383] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3385] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3388] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3390] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3397] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3400] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3403] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3408] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3413] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3419] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3428] dhcp4 (eth0): state changed new lease, address=192.168.25.195
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3437] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 06:44:14 np0005558317 systemd[1]: Starting Network Manager Wait Online...
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3463] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3467] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 06:44:14 np0005558317 NetworkManager[7245]: <info>  [1765608254.3470] device (lo): Activation: successful, device activated.
Dec 13 06:44:14 np0005558317 sudo[7235]: pam_unix(sudo:session): session closed for user root
Dec 13 06:44:14 np0005558317 python3[7309]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e6f-3cad-395d-e36d-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3893] dhcp6 (eth0): state changed new lease, address=2001:db8::1cf
Dec 13 06:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3903] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 06:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3935] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 06:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3936] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 06:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3939] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 06:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3940] device (eth0): Activation: successful, device activated.
Dec 13 06:44:15 np0005558317 NetworkManager[7245]: <info>  [1765608255.3944] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 06:44:25 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 06:44:44 np0005558317 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 06:44:54 np0005558317 sudo[7408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxeumjcdipeayobboitwkalpyxueooxr ; OS_CLOUD=ibm-bm3-nodepool /usr/bin/python3'
Dec 13 06:44:54 np0005558317 sudo[7408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:44:54 np0005558317 python3[7410]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:44:54 np0005558317 sudo[7408]: pam_unix(sudo:session): session closed for user root
Dec 13 06:44:54 np0005558317 sudo[7481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iabpmjemgedlddopmoanszlarqqzdbtf ; OS_CLOUD=ibm-bm3-nodepool /usr/bin/python3'
Dec 13 06:44:54 np0005558317 sudo[7481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:44:54 np0005558317 python3[7483]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608294.0741224-273-27766293396627/source _original_basename=tmpb2ss1gkr follow=False checksum=480db894146ef2cc1376d935191c022003cc0988 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:44:54 np0005558317 sudo[7481]: pam_unix(sudo:session): session closed for user root
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4103] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 06:44:59 np0005558317 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 06:44:59 np0005558317 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4292] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4295] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4301] device (eth1): Activation: successful, device activated.
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4306] manager: startup complete
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4308] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <warn>  [1765608299.4315] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4321] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 13 06:44:59 np0005558317 systemd[1]: Finished Network Manager Wait Online.
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4425] dhcp4 (eth1): canceled DHCP transaction
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4425] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4425] dhcp4 (eth1): state changed no lease
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4436] policy: auto-activating connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4439] device (eth1): Activation: starting connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4440] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4442] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4449] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4456] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4483] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4485] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 06:44:59 np0005558317 NetworkManager[7245]: <info>  [1765608299.4488] device (eth1): Activation: successful, device activated.
Dec 13 06:45:09 np0005558317 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 06:45:37 np0005558317 systemd[4373]: Starting Mark boot as successful...
Dec 13 06:45:37 np0005558317 systemd[4373]: Finished Mark boot as successful.
Dec 13 06:45:54 np0005558317 sshd-session[4382]: Received disconnect from 192.168.25.12 port 58500:11: disconnected by user
Dec 13 06:45:54 np0005558317 sshd-session[4382]: Disconnected from user zuul 192.168.25.12 port 58500
Dec 13 06:45:54 np0005558317 sshd-session[4369]: pam_unix(sshd:session): session closed for user zuul
Dec 13 06:45:54 np0005558317 systemd-logind[745]: Session 1 logged out. Waiting for processes to exit.
Dec 13 06:48:37 np0005558317 systemd[4373]: Created slice User Background Tasks Slice.
Dec 13 06:48:37 np0005558317 systemd[4373]: Starting Cleanup of User's Temporary Files and Directories...
Dec 13 06:48:37 np0005558317 systemd[4373]: Finished Cleanup of User's Temporary Files and Directories.
Dec 13 06:50:12 np0005558317 sshd-session[7535]: Accepted publickey for zuul from 192.168.25.12 port 54546 ssh2: RSA SHA256:6D1WjYOFjoFBsumnInA3EGvtTfCaVlI9gahR8Wfk2Jc
Dec 13 06:50:12 np0005558317 systemd-logind[745]: New session 3 of user zuul.
Dec 13 06:50:12 np0005558317 systemd[1]: Started Session 3 of User zuul.
Dec 13 06:50:12 np0005558317 sshd-session[7535]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 06:50:12 np0005558317 sudo[7562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxjfiibhtyrcuhavjjwbcfnnhxzunqhl ; /usr/bin/python3'
Dec 13 06:50:12 np0005558317 sudo[7562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:12 np0005558317 python3[7564]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e6f-3cad-84d4-1fbf-000000001f5b-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:50:12 np0005558317 sudo[7562]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:12 np0005558317 sudo[7591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xykspfihtrjgitnidpsjyitfgflmzwyo ; /usr/bin/python3'
Dec 13 06:50:12 np0005558317 sudo[7591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:12 np0005558317 python3[7593]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:50:12 np0005558317 sudo[7591]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:12 np0005558317 sudo[7617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrhewsyjwfivyhnkqlvgrdcrdrdkcwqb ; /usr/bin/python3'
Dec 13 06:50:12 np0005558317 sudo[7617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:13 np0005558317 python3[7619]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:50:13 np0005558317 sudo[7617]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:13 np0005558317 sudo[7643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrxxzdqrgcrdckqqgclncldhiqfbrava ; /usr/bin/python3'
Dec 13 06:50:13 np0005558317 sudo[7643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:13 np0005558317 python3[7645]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:50:13 np0005558317 sudo[7643]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:13 np0005558317 sudo[7669]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sawbswyjfkpsxpujfqssmjiggbhhdszw ; /usr/bin/python3'
Dec 13 06:50:13 np0005558317 sudo[7669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:13 np0005558317 python3[7671]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:50:13 np0005558317 sudo[7669]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:13 np0005558317 sudo[7695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tetqeprjpvnvxcdwbuzgavfelbhqcjnl ; /usr/bin/python3'
Dec 13 06:50:13 np0005558317 sudo[7695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:13 np0005558317 python3[7697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:50:13 np0005558317 sudo[7695]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:14 np0005558317 sudo[7773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkhmbibtvuaktyxcfiqvbxxurpsiiool ; /usr/bin/python3'
Dec 13 06:50:14 np0005558317 sudo[7773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:14 np0005558317 python3[7775]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:50:14 np0005558317 sudo[7773]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:14 np0005558317 sudo[7846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pubctwofjwgkvomvktgizmfqbnrhyins ; /usr/bin/python3'
Dec 13 06:50:14 np0005558317 sudo[7846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:14 np0005558317 python3[7848]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608614.028469-475-277155668861196/source _original_basename=tmp1f8w1sqh follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:50:14 np0005558317 sudo[7846]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:14 np0005558317 sudo[7896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saoxubsahwisbpnuqbsayffinrippfwu ; /usr/bin/python3'
Dec 13 06:50:14 np0005558317 sudo[7896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:15 np0005558317 python3[7898]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 06:50:15 np0005558317 systemd[1]: Reloading.
Dec 13 06:50:15 np0005558317 systemd-rc-local-generator[7917]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 06:50:15 np0005558317 sudo[7896]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:16 np0005558317 sudo[7951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhrzckowzukphfqzpivlwawilpmohogc ; /usr/bin/python3'
Dec 13 06:50:16 np0005558317 sudo[7951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:16 np0005558317 python3[7953]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 13 06:50:16 np0005558317 sudo[7951]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:16 np0005558317 sudo[7977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkwyyjdvycwmykmxdrkwvhrdlqxyxht ; /usr/bin/python3'
Dec 13 06:50:16 np0005558317 sudo[7977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:16 np0005558317 python3[7979]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:50:16 np0005558317 sudo[7977]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:16 np0005558317 sudo[8005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysifgffxahsttymdsxltfmzdrzccjsww ; /usr/bin/python3'
Dec 13 06:50:16 np0005558317 sudo[8005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:16 np0005558317 python3[8007]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:50:16 np0005558317 sudo[8005]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:17 np0005558317 sudo[8033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqywyraosvugaoauavovnhckbkmoccbe ; /usr/bin/python3'
Dec 13 06:50:17 np0005558317 sudo[8033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:17 np0005558317 python3[8035]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:50:17 np0005558317 sudo[8033]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:17 np0005558317 sudo[8061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzvwzzlhusrxumqaquorecakxjbwglse ; /usr/bin/python3'
Dec 13 06:50:17 np0005558317 sudo[8061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:17 np0005558317 python3[8063]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:50:17 np0005558317 sudo[8061]: pam_unix(sudo:session): session closed for user root
Dec 13 06:50:17 np0005558317 python3[8090]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e6f-3cad-84d4-1fbf-000000001f62-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:50:18 np0005558317 python3[8120]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 06:50:20 np0005558317 sshd-session[7538]: Connection closed by 192.168.25.12 port 54546
Dec 13 06:50:20 np0005558317 sshd-session[7535]: pam_unix(sshd:session): session closed for user zuul
Dec 13 06:50:20 np0005558317 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 06:50:20 np0005558317 systemd[1]: session-3.scope: Consumed 3.129s CPU time.
Dec 13 06:50:20 np0005558317 systemd-logind[745]: Session 3 logged out. Waiting for processes to exit.
Dec 13 06:50:20 np0005558317 systemd-logind[745]: Removed session 3.
Dec 13 06:50:22 np0005558317 sshd-session[8125]: Accepted publickey for zuul from 192.168.25.12 port 49490 ssh2: RSA SHA256:6D1WjYOFjoFBsumnInA3EGvtTfCaVlI9gahR8Wfk2Jc
Dec 13 06:50:22 np0005558317 systemd-logind[745]: New session 4 of user zuul.
Dec 13 06:50:22 np0005558317 systemd[1]: Started Session 4 of User zuul.
Dec 13 06:50:22 np0005558317 sshd-session[8125]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 06:50:22 np0005558317 sudo[8152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozgpetdmhsmooelszommqgnffveufjrd ; /usr/bin/python3'
Dec 13 06:50:22 np0005558317 sudo[8152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:50:22 np0005558317 python3[8154]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 06:50:36 np0005558317 kernel: SELinux:  Converting 386 SID table entries...
Dec 13 06:50:36 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 06:50:36 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 06:50:36 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 06:50:36 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 06:50:36 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 06:50:36 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 06:50:36 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 06:50:43 np0005558317 kernel: SELinux:  Converting 386 SID table entries...
Dec 13 06:50:43 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 06:50:43 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 06:50:43 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 06:50:43 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 06:50:43 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 06:50:43 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 06:50:43 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 06:50:50 np0005558317 kernel: SELinux:  Converting 386 SID table entries...
Dec 13 06:50:50 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 06:50:50 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 06:50:50 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 06:50:50 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 06:50:50 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 06:50:50 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 06:50:50 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 06:50:51 np0005558317 setsebool[8222]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 13 06:50:51 np0005558317 setsebool[8222]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 13 06:51:00 np0005558317 kernel: SELinux:  Converting 389 SID table entries...
Dec 13 06:51:00 np0005558317 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 06:51:00 np0005558317 kernel: SELinux:  policy capability open_perms=1
Dec 13 06:51:00 np0005558317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 06:51:00 np0005558317 kernel: SELinux:  policy capability always_check_network=0
Dec 13 06:51:00 np0005558317 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 06:51:00 np0005558317 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 06:51:00 np0005558317 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 06:51:12 np0005558317 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 13 06:51:12 np0005558317 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 06:51:12 np0005558317 systemd[1]: Starting man-db-cache-update.service...
Dec 13 06:51:12 np0005558317 systemd[1]: Reloading.
Dec 13 06:51:12 np0005558317 systemd-rc-local-generator[8969]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 06:51:12 np0005558317 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 06:51:13 np0005558317 sudo[8152]: pam_unix(sudo:session): session closed for user root
Dec 13 06:51:15 np0005558317 python3[13544]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                              _uses_shell=True zuul_log_id=fa163e6f-3cad-55e7-0116-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:51:16 np0005558317 kernel: evm: overlay not supported
Dec 13 06:51:16 np0005558317 systemd[4373]: Starting D-Bus User Message Bus...
Dec 13 06:51:16 np0005558317 dbus-broker-launch[14044]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 13 06:51:16 np0005558317 dbus-broker-launch[14044]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 13 06:51:16 np0005558317 systemd[4373]: Started D-Bus User Message Bus.
Dec 13 06:51:16 np0005558317 dbus-broker-lau[14044]: Ready
Dec 13 06:51:16 np0005558317 systemd[4373]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 13 06:51:16 np0005558317 systemd[4373]: Created slice Slice /user.
Dec 13 06:51:16 np0005558317 systemd[4373]: podman-14022.scope: unit configures an IP firewall, but not running as root.
Dec 13 06:51:16 np0005558317 systemd[4373]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 06:51:16 np0005558317 systemd[4373]: Started podman-14022.scope.
Dec 13 06:51:17 np0005558317 systemd[4373]: Started podman-pause-1c7804ac.scope.
Dec 13 06:51:17 np0005558317 sudo[14930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgitsezwuvhecuscgwutuycnlszbeixp ; /usr/bin/python3'
Dec 13 06:51:17 np0005558317 sudo[14930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:51:17 np0005558317 python3[14945]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                             location = "38.129.56.153:5001"
                                             insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                             location = "38.129.56.153:5001"
                                             insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:51:17 np0005558317 python3[14945]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 13 06:51:17 np0005558317 sudo[14930]: pam_unix(sudo:session): session closed for user root
Dec 13 06:51:18 np0005558317 sshd-session[8128]: Connection closed by 192.168.25.12 port 49490
Dec 13 06:51:18 np0005558317 sshd-session[8125]: pam_unix(sshd:session): session closed for user zuul
Dec 13 06:51:18 np0005558317 systemd-logind[745]: Session 4 logged out. Waiting for processes to exit.
Dec 13 06:51:18 np0005558317 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 06:51:18 np0005558317 systemd[1]: session-4.scope: Consumed 45.962s CPU time.
Dec 13 06:51:18 np0005558317 systemd-logind[745]: Removed session 4.
Dec 13 06:51:34 np0005558317 sshd-session[28626]: Connection closed by 192.168.25.167 port 42900 [preauth]
Dec 13 06:51:34 np0005558317 sshd-session[28631]: Connection closed by 192.168.25.167 port 42916 [preauth]
Dec 13 06:51:34 np0005558317 sshd-session[28632]: Unable to negotiate with 192.168.25.167 port 42924: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 13 06:51:34 np0005558317 sshd-session[28633]: Unable to negotiate with 192.168.25.167 port 42928: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 13 06:51:34 np0005558317 sshd-session[28628]: Unable to negotiate with 192.168.25.167 port 42934: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 13 06:51:35 np0005558317 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 06:51:35 np0005558317 systemd[1]: Finished man-db-cache-update.service.
Dec 13 06:51:35 np0005558317 systemd[1]: man-db-cache-update.service: Consumed 27.872s CPU time.
Dec 13 06:51:35 np0005558317 systemd[1]: run-r0cd49d91c2404a5483104891b3245523.service: Deactivated successfully.
Dec 13 06:51:42 np0005558317 sshd-session[29642]: Accepted publickey for zuul from 192.168.25.12 port 54758 ssh2: RSA SHA256:6D1WjYOFjoFBsumnInA3EGvtTfCaVlI9gahR8Wfk2Jc
Dec 13 06:51:42 np0005558317 systemd-logind[745]: New session 5 of user zuul.
Dec 13 06:51:42 np0005558317 systemd[1]: Started Session 5 of User zuul.
Dec 13 06:51:42 np0005558317 sshd-session[29642]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 06:51:42 np0005558317 python3[29669]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnHqEnifdMogqe2koi3kf/8MOGYg7doIt/u/Zi+s0YfOOikjBYd243liAj5ighEVQqRN5syYt1lhjz20ZhMgK4= zuul@np0005558316
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:51:43 np0005558317 sudo[29693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trxybyxhlqfrwytvyxviofavekdtwhta ; /usr/bin/python3'
Dec 13 06:51:43 np0005558317 sudo[29693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:51:43 np0005558317 python3[29695]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnHqEnifdMogqe2koi3kf/8MOGYg7doIt/u/Zi+s0YfOOikjBYd243liAj5ighEVQqRN5syYt1lhjz20ZhMgK4= zuul@np0005558316
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:51:43 np0005558317 sudo[29693]: pam_unix(sudo:session): session closed for user root
Dec 13 06:51:43 np0005558317 sudo[29719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zutudyminjusawkblfxwlyiyqgwwbalf ; /usr/bin/python3'
Dec 13 06:51:43 np0005558317 sudo[29719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:51:44 np0005558317 python3[29721]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005558317 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 13 06:51:44 np0005558317 useradd[29723]: new group: name=cloud-admin, GID=1002
Dec 13 06:51:44 np0005558317 useradd[29723]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 13 06:51:44 np0005558317 sudo[29719]: pam_unix(sudo:session): session closed for user root
Dec 13 06:51:44 np0005558317 sudo[29753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iijqfiuafjkdbtyqeqywaymjpuasgfpd ; /usr/bin/python3'
Dec 13 06:51:44 np0005558317 sudo[29753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:51:44 np0005558317 python3[29755]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnHqEnifdMogqe2koi3kf/8MOGYg7doIt/u/Zi+s0YfOOikjBYd243liAj5ighEVQqRN5syYt1lhjz20ZhMgK4= zuul@np0005558316
                                              manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 06:51:44 np0005558317 sudo[29753]: pam_unix(sudo:session): session closed for user root
Dec 13 06:51:44 np0005558317 sudo[29831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okydgjzmuuilagxskrlhapmuqytdpmfu ; /usr/bin/python3'
Dec 13 06:51:44 np0005558317 sudo[29831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:51:44 np0005558317 python3[29833]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:51:44 np0005558317 sudo[29831]: pam_unix(sudo:session): session closed for user root
Dec 13 06:51:44 np0005558317 sudo[29904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eandlwjtqptbndqmevtqomjmmgmfurpn ; /usr/bin/python3'
Dec 13 06:51:44 np0005558317 sudo[29904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:51:44 np0005558317 python3[29906]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608704.4171355-137-252240559304452/source _original_basename=tmp_boh_12k follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:51:44 np0005558317 sudo[29904]: pam_unix(sudo:session): session closed for user root
Dec 13 06:51:45 np0005558317 sudo[29954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtwpimfyrzogolzgxpdxfpgajlyvdotk ; /usr/bin/python3'
Dec 13 06:51:45 np0005558317 sudo[29954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:51:45 np0005558317 python3[29956]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 13 06:51:45 np0005558317 systemd[1]: Starting Hostname Service...
Dec 13 06:51:45 np0005558317 systemd[1]: Started Hostname Service.
Dec 13 06:51:45 np0005558317 systemd-hostnamed[29960]: Changed pretty hostname to 'compute-0'
Dec 13 06:51:45 compute-0 systemd-hostnamed[29960]: Hostname set to <compute-0> (static)
Dec 13 06:51:45 compute-0 NetworkManager[7245]: <info>  [1765608705.6635] hostname: static hostname changed from "np0005558317" to "compute-0"
Dec 13 06:51:45 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 06:51:45 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 06:51:45 compute-0 sudo[29954]: pam_unix(sudo:session): session closed for user root
Dec 13 06:51:45 compute-0 sshd-session[29645]: Connection closed by 192.168.25.12 port 54758
Dec 13 06:51:45 compute-0 sshd-session[29642]: pam_unix(sshd:session): session closed for user zuul
Dec 13 06:51:45 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 06:51:45 compute-0 systemd[1]: session-5.scope: Consumed 1.641s CPU time.
Dec 13 06:51:45 compute-0 systemd-logind[745]: Session 5 logged out. Waiting for processes to exit.
Dec 13 06:51:45 compute-0 systemd-logind[745]: Removed session 5.
Dec 13 06:51:55 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 06:52:15 compute-0 systemd[1]: Starting dnf makecache...
Dec 13 06:52:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 06:52:15 compute-0 dnf[29973]: Failed determining last makecache time.
Dec 13 06:52:17 compute-0 dnf[29973]: CentOS Stream 9 - BaseOS                        5.1 kB/s | 7.3 kB     00:01
Dec 13 06:52:19 compute-0 dnf[29973]: CentOS Stream 9 - AppStream                     3.4 kB/s | 7.8 kB     00:02
Dec 13 06:52:20 compute-0 dnf[29973]: CentOS Stream 9 - CRB                            17 kB/s | 7.2 kB     00:00
Dec 13 06:52:21 compute-0 dnf[29973]: CentOS Stream 9 - Extras packages               8.3 kB/s | 8.3 kB     00:00
Dec 13 06:52:21 compute-0 dnf[29973]: Metadata cache created.
Dec 13 06:52:21 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 13 06:52:21 compute-0 systemd[1]: Finished dnf makecache.
Dec 13 06:55:34 compute-0 sshd-session[29985]: Accepted publickey for zuul from 192.168.25.167 port 60932 ssh2: RSA SHA256:6D1WjYOFjoFBsumnInA3EGvtTfCaVlI9gahR8Wfk2Jc
Dec 13 06:55:34 compute-0 systemd-logind[745]: New session 6 of user zuul.
Dec 13 06:55:34 compute-0 systemd[1]: Started Session 6 of User zuul.
Dec 13 06:55:34 compute-0 sshd-session[29985]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 06:55:34 compute-0 python3[30061]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 06:55:35 compute-0 sudo[30171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uilcoeyeynemdjhsmxvemtjsfzovkfqy ; /usr/bin/python3'
Dec 13 06:55:35 compute-0 sudo[30171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:36 compute-0 python3[30173]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:55:36 compute-0 sudo[30171]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:36 compute-0 sudo[30244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lollfkrpqercypftnguqkyagvennarhy ; /usr/bin/python3'
Dec 13 06:55:36 compute-0 sudo[30244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:36 compute-0 python3[30246]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=delorean.repo follow=False checksum=619eee7d4b000c2fdbd89639e9af5cd9cd1e4284 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:55:36 compute-0 sudo[30244]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:36 compute-0 sudo[30270]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnglhiixfypomswnxbftdzpuipdswvav ; /usr/bin/python3'
Dec 13 06:55:36 compute-0 sudo[30270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:36 compute-0 python3[30272]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:55:36 compute-0 sudo[30270]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:36 compute-0 sudo[30343]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idfofxfmxkewyaoltulbcuihuhjdqjqr ; /usr/bin/python3'
Dec 13 06:55:36 compute-0 sudo[30343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:36 compute-0 python3[30345]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=32cab4d7d3069e03e1e375a1684f22cb2eb72603 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:55:36 compute-0 sudo[30343]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:36 compute-0 sudo[30369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shzmnipsdetannybwbmtqnslsmhkklcu ; /usr/bin/python3'
Dec 13 06:55:36 compute-0 sudo[30369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:37 compute-0 python3[30371]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:55:37 compute-0 sudo[30369]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:37 compute-0 sudo[30442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aflguycvfrcvccsbnjutqxnezkytqgwm ; /usr/bin/python3'
Dec 13 06:55:37 compute-0 sudo[30442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:37 compute-0 python3[30444]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=5c739387d960f7119f9d22475c90dcd56f13e885 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:55:37 compute-0 sudo[30442]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:37 compute-0 sudo[30468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpaidemjsfagoubfyivmyihbsoiszlms ; /usr/bin/python3'
Dec 13 06:55:37 compute-0 sudo[30468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:37 compute-0 python3[30470]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:55:37 compute-0 sudo[30468]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:37 compute-0 sudo[30541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlnyakkuzorpnwintshncrlxlbrjwtmw ; /usr/bin/python3'
Dec 13 06:55:37 compute-0 sudo[30541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:37 compute-0 python3[30543]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=8c00581855ef07972e002c82cc33b7b03ecccc44 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:55:37 compute-0 sudo[30541]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:37 compute-0 sudo[30567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnhltgmgcurmkgnrdfgpmzbjeyloyejv ; /usr/bin/python3'
Dec 13 06:55:37 compute-0 sudo[30567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:37 compute-0 python3[30569]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:55:37 compute-0 sudo[30567]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:38 compute-0 sudo[30640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cawyxdcqqqsixsdtugrvjxatjtommuvc ; /usr/bin/python3'
Dec 13 06:55:38 compute-0 sudo[30640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:38 compute-0 python3[30642]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=5515871802d2268513e691cf460c59c7da7132f9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:55:38 compute-0 sudo[30640]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:38 compute-0 sudo[30666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yurqhtzrlxhwhexnoucnuwnkzcfpecal ; /usr/bin/python3'
Dec 13 06:55:38 compute-0 sudo[30666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:38 compute-0 python3[30668]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:55:38 compute-0 sudo[30666]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:38 compute-0 sudo[30739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkvpsybonjbotjuuzfuanhpfkcrwliwn ; /usr/bin/python3'
Dec 13 06:55:38 compute-0 sudo[30739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:38 compute-0 python3[30741]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=c87c0371a768c46886c8904021e8b85df789a625 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:55:38 compute-0 sudo[30739]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:38 compute-0 sudo[30765]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ainlxrdpydmrsjsooqwwgbpafabyzzvq ; /usr/bin/python3'
Dec 13 06:55:38 compute-0 sudo[30765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:38 compute-0 python3[30767]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 06:55:38 compute-0 sudo[30765]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:38 compute-0 sudo[30838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvysizggnqrtztrevvpjmkxabyhilufp ; /usr/bin/python3'
Dec 13 06:55:38 compute-0 sudo[30838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 06:55:39 compute-0 python3[30840]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765608935.8850853-33999-143407617000061/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 06:55:39 compute-0 sudo[30838]: pam_unix(sudo:session): session closed for user root
Dec 13 06:55:40 compute-0 sshd-session[30865]: Connection closed by 192.168.122.11 port 34640 [preauth]
Dec 13 06:55:40 compute-0 sshd-session[30866]: Connection closed by 192.168.122.11 port 34642 [preauth]
Dec 13 06:55:40 compute-0 sshd-session[30867]: Unable to negotiate with 192.168.122.11 port 34644: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 13 06:55:40 compute-0 sshd-session[30868]: Unable to negotiate with 192.168.122.11 port 34652: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 13 06:55:40 compute-0 sshd-session[30869]: Unable to negotiate with 192.168.122.11 port 34664: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 13 06:55:51 compute-0 python3[30898]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 06:57:37 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 13 06:57:37 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 13 06:57:37 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 13 06:57:37 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 13 07:00:50 compute-0 sshd-session[29988]: Received disconnect from 192.168.25.167 port 60932:11: disconnected by user
Dec 13 07:00:50 compute-0 sshd-session[29988]: Disconnected from user zuul 192.168.25.167 port 60932
Dec 13 07:00:50 compute-0 sshd-session[29985]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:00:50 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 07:00:50 compute-0 systemd[1]: session-6.scope: Consumed 3.523s CPU time.
Dec 13 07:00:50 compute-0 systemd-logind[745]: Session 6 logged out. Waiting for processes to exit.
Dec 13 07:00:50 compute-0 systemd-logind[745]: Removed session 6.
Dec 13 07:01:01 compute-0 CROND[30905]: (root) CMD (run-parts /etc/cron.hourly)
Dec 13 07:01:01 compute-0 run-parts[30908]: (/etc/cron.hourly) starting 0anacron
Dec 13 07:01:01 compute-0 anacron[30916]: Anacron started on 2025-12-13
Dec 13 07:01:02 compute-0 anacron[30916]: Will run job `cron.daily' in 28 min.
Dec 13 07:01:02 compute-0 anacron[30916]: Will run job `cron.weekly' in 48 min.
Dec 13 07:01:02 compute-0 anacron[30916]: Will run job `cron.monthly' in 68 min.
Dec 13 07:01:02 compute-0 anacron[30916]: Jobs will be executed sequentially
Dec 13 07:01:02 compute-0 run-parts[30918]: (/etc/cron.hourly) finished 0anacron
Dec 13 07:01:02 compute-0 CROND[30904]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 13 07:05:42 compute-0 sshd-session[30919]: Accepted publickey for zuul from 192.168.122.30 port 54152 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:05:42 compute-0 systemd-logind[745]: New session 7 of user zuul.
Dec 13 07:05:42 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 13 07:05:42 compute-0 sshd-session[30919]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:05:42 compute-0 python3.9[31072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:05:43 compute-0 sudo[31251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fatgkgnnkborhhyndsutqgtqzoonvztm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609543.4151187-32-178747101219360/AnsiballZ_command.py'
Dec 13 07:05:43 compute-0 sudo[31251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:05:43 compute-0 python3.9[31253]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:05:52 compute-0 sudo[31251]: pam_unix(sudo:session): session closed for user root
Dec 13 07:05:52 compute-0 sshd-session[30922]: Connection closed by 192.168.122.30 port 54152
Dec 13 07:05:52 compute-0 sshd-session[30919]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:05:52 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 07:05:52 compute-0 systemd[1]: session-7.scope: Consumed 6.281s CPU time.
Dec 13 07:05:52 compute-0 systemd-logind[745]: Session 7 logged out. Waiting for processes to exit.
Dec 13 07:05:52 compute-0 systemd-logind[745]: Removed session 7.
Dec 13 07:06:08 compute-0 sshd-session[31310]: Accepted publickey for zuul from 192.168.122.30 port 48678 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:06:08 compute-0 systemd-logind[745]: New session 8 of user zuul.
Dec 13 07:06:08 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 13 07:06:08 compute-0 sshd-session[31310]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:06:08 compute-0 python3.9[31463]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 13 07:06:09 compute-0 python3.9[31637]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:06:10 compute-0 sudo[31788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjioyblayaqiqgugfczpcziegatdfrbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609569.899311-45-271933400434929/AnsiballZ_command.py'
Dec 13 07:06:10 compute-0 sudo[31788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:10 compute-0 python3.9[31790]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:06:10 compute-0 sudo[31788]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:10 compute-0 sudo[31941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uegutpzytibrtlwnnyzbyhlryunnpeeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609570.5886729-57-213235550322871/AnsiballZ_stat.py'
Dec 13 07:06:10 compute-0 sudo[31941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:11 compute-0 python3.9[31943]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:06:11 compute-0 sudo[31941]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:11 compute-0 sudo[32093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbufstjpjnkhihrxtwilxkxqjxaeaoja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609571.1268928-65-45141775977584/AnsiballZ_file.py'
Dec 13 07:06:11 compute-0 sudo[32093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:11 compute-0 python3.9[32095]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:06:11 compute-0 sudo[32093]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:11 compute-0 sudo[32245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sznyztgojpjdzrmdbnldiyqksxqrznac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609571.6877418-73-52532129282652/AnsiballZ_stat.py'
Dec 13 07:06:11 compute-0 sudo[32245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:12 compute-0 python3.9[32247]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:06:12 compute-0 sudo[32245]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:12 compute-0 sudo[32368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqqkczyhrzukvpaqjbixhncxkojbswh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609571.6877418-73-52532129282652/AnsiballZ_copy.py'
Dec 13 07:06:12 compute-0 sudo[32368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:12 compute-0 python3.9[32370]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609571.6877418-73-52532129282652/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:06:12 compute-0 sudo[32368]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:12 compute-0 sudo[32520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhvhekrgpoejxtzqcrnvzslujkrzzsde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609572.653376-88-183954963131040/AnsiballZ_setup.py'
Dec 13 07:06:12 compute-0 sudo[32520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:13 compute-0 python3.9[32522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:06:13 compute-0 sudo[32520]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:13 compute-0 sudo[32676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvzpxbrrrhhfbyfpmpirmtejmynvpljd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609573.4592595-96-40219555147488/AnsiballZ_file.py'
Dec 13 07:06:13 compute-0 sudo[32676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:13 compute-0 python3.9[32678]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:06:13 compute-0 sudo[32676]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:14 compute-0 sudo[32828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diizmodarcuwrukmgcktnyjjbklthqwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609573.9475179-105-60507863436350/AnsiballZ_file.py'
Dec 13 07:06:14 compute-0 sudo[32828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:14 compute-0 python3.9[32830]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:06:14 compute-0 sudo[32828]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:14 compute-0 python3.9[32980]: ansible-ansible.builtin.service_facts Invoked
Dec 13 07:06:16 compute-0 python3.9[33233]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:06:17 compute-0 python3.9[33383]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:06:18 compute-0 python3.9[33537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:06:18 compute-0 sudo[33693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xriqbqyovvalyytltyqrryeeyaepnbbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609578.6899774-153-258375277034307/AnsiballZ_setup.py'
Dec 13 07:06:18 compute-0 sudo[33693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:19 compute-0 python3.9[33695]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:06:19 compute-0 sudo[33693]: pam_unix(sudo:session): session closed for user root
Dec 13 07:06:19 compute-0 sudo[33777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stpzlrzmruzomdqtzoqquadyffnjubyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609578.6899774-153-258375277034307/AnsiballZ_dnf.py'
Dec 13 07:06:19 compute-0 sudo[33777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:06:19 compute-0 python3.9[33779]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:07:55 compute-0 systemd[1]: Reloading.
Dec 13 07:07:55 compute-0 systemd-rc-local-generator[33981]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:07:56 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 13 07:07:56 compute-0 systemd[1]: Reloading.
Dec 13 07:07:56 compute-0 systemd-rc-local-generator[34018]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:07:56 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 13 07:07:56 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 13 07:07:56 compute-0 systemd[1]: Reloading.
Dec 13 07:07:56 compute-0 systemd-rc-local-generator[34057]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:07:56 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 13 07:07:56 compute-0 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 07:07:56 compute-0 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 07:07:56 compute-0 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 07:08:41 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Dec 13 07:08:41 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 07:08:41 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 07:08:41 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 07:08:41 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 07:08:41 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 07:08:41 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 07:08:41 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 07:08:41 compute-0 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 13 07:08:41 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 07:08:41 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 07:08:41 compute-0 systemd[1]: Reloading.
Dec 13 07:08:41 compute-0 systemd-rc-local-generator[34370]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:08:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 07:08:42 compute-0 sudo[33777]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 07:08:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 07:08:42 compute-0 systemd[1]: run-r4e83c113347a4f54ab484307fa3d7a79.service: Deactivated successfully.
Dec 13 07:08:42 compute-0 sudo[35284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmaewziovjlyewkicnkwhkrzxclmvuhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609722.3206875-165-11205929501088/AnsiballZ_command.py'
Dec 13 07:08:42 compute-0 sudo[35284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:42 compute-0 python3.9[35286]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:08:43 compute-0 sudo[35284]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:43 compute-0 sudo[35565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyprmmmtivkjieheeoetdjjtzfdawwdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609723.4444325-173-17054577559247/AnsiballZ_selinux.py'
Dec 13 07:08:43 compute-0 sudo[35565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:44 compute-0 python3.9[35567]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 13 07:08:44 compute-0 sudo[35565]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:44 compute-0 sudo[35717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjysviaboidkupxtabwozvaijrneqhri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609724.3899713-184-213181661500552/AnsiballZ_command.py'
Dec 13 07:08:44 compute-0 sudo[35717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:44 compute-0 python3.9[35719]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 13 07:08:45 compute-0 sudo[35717]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:45 compute-0 sudo[35870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqjkiuevmdiegqgtifmtbkbioczzlrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609725.4217253-192-95343606371064/AnsiballZ_file.py'
Dec 13 07:08:45 compute-0 sudo[35870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:46 compute-0 python3.9[35872]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:08:46 compute-0 sudo[35870]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:47 compute-0 sudo[36022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjswdndpxytwyrxsxolbgvkrvolivqfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609726.6916482-200-134799826015022/AnsiballZ_mount.py'
Dec 13 07:08:47 compute-0 sudo[36022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:47 compute-0 python3.9[36024]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 13 07:08:47 compute-0 sudo[36022]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:47 compute-0 sudo[36174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gomhdhzqwnbpvatcivwacfszuiarkmjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609727.7138093-228-72103778951051/AnsiballZ_file.py'
Dec 13 07:08:47 compute-0 sudo[36174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:48 compute-0 python3.9[36176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:08:48 compute-0 sudo[36174]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:48 compute-0 sudo[36326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoxyqfndmjpeuvebgxtpgaydlqpvwkpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609728.1732094-236-97114366218405/AnsiballZ_stat.py'
Dec 13 07:08:48 compute-0 sudo[36326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:48 compute-0 python3.9[36328]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:08:48 compute-0 sudo[36326]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:48 compute-0 sudo[36449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjgumfsoasoczyahehepwztlzmlhqitd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609728.1732094-236-97114366218405/AnsiballZ_copy.py'
Dec 13 07:08:48 compute-0 sudo[36449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:48 compute-0 python3.9[36451]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609728.1732094-236-97114366218405/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:08:48 compute-0 sudo[36449]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:49 compute-0 sudo[36601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnebaxlsvwabnxgmoozkiahpzksuqaai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609729.238591-260-136121868393376/AnsiballZ_stat.py'
Dec 13 07:08:49 compute-0 sudo[36601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:49 compute-0 python3.9[36603]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:08:49 compute-0 sudo[36601]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:49 compute-0 sudo[36753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehztnwlkpfdyioivhxgdicysvqxqrztk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609729.691893-268-61807939778822/AnsiballZ_command.py'
Dec 13 07:08:49 compute-0 sudo[36753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:50 compute-0 python3.9[36755]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:08:50 compute-0 sudo[36753]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:50 compute-0 sudo[36906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdhucvobjesuepiyupouuskxndwzpxdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609730.174529-276-190671422383851/AnsiballZ_file.py'
Dec 13 07:08:50 compute-0 sudo[36906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:50 compute-0 python3.9[36908]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:08:50 compute-0 sudo[36906]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:51 compute-0 sudo[37058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieldftothuglkiallrxzsfzehunezehl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609730.7738025-287-151901465047323/AnsiballZ_getent.py'
Dec 13 07:08:51 compute-0 sudo[37058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:53 compute-0 python3.9[37060]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 13 07:08:53 compute-0 sudo[37058]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:53 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:08:53 compute-0 sudo[37212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnsbpsgpxzmuacxqzirqstiwrurtejsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609733.3403676-295-132065355767462/AnsiballZ_group.py'
Dec 13 07:08:53 compute-0 sudo[37212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:53 compute-0 python3.9[37214]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 07:08:53 compute-0 groupadd[37215]: group added to /etc/group: name=qemu, GID=107
Dec 13 07:08:53 compute-0 groupadd[37215]: group added to /etc/gshadow: name=qemu
Dec 13 07:08:53 compute-0 groupadd[37215]: new group: name=qemu, GID=107
Dec 13 07:08:53 compute-0 sudo[37212]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:54 compute-0 sudo[37370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihmjrfjvixogxpspgxytcspwzmoijsed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609733.928531-303-68994786232276/AnsiballZ_user.py'
Dec 13 07:08:54 compute-0 sudo[37370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:54 compute-0 python3.9[37372]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 07:08:54 compute-0 useradd[37374]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 13 07:08:54 compute-0 sudo[37370]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:54 compute-0 sudo[37530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrxuqkdssodtlhtpsadpshzdgthlhhty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609734.5959277-311-164593264532166/AnsiballZ_getent.py'
Dec 13 07:08:54 compute-0 sudo[37530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:54 compute-0 python3.9[37532]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 13 07:08:54 compute-0 sudo[37530]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:55 compute-0 sudo[37683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffuehdqtgxkspihitlvdorkamucorxtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609735.0793219-319-76619870290964/AnsiballZ_group.py'
Dec 13 07:08:55 compute-0 sudo[37683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:55 compute-0 python3.9[37685]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 07:08:55 compute-0 groupadd[37686]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 13 07:08:55 compute-0 groupadd[37686]: group added to /etc/gshadow: name=hugetlbfs
Dec 13 07:08:55 compute-0 groupadd[37686]: new group: name=hugetlbfs, GID=42477
Dec 13 07:08:55 compute-0 sudo[37683]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:55 compute-0 sudo[37841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwxallomfekpiemscigscnqovdhswkzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609735.6158123-328-240673288497833/AnsiballZ_file.py'
Dec 13 07:08:55 compute-0 sudo[37841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:55 compute-0 python3.9[37843]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 13 07:08:55 compute-0 sudo[37841]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:56 compute-0 sudo[37993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbrsbrnbqbjsqubfbhjserbzpbzqaqmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609736.2050867-339-108958921474548/AnsiballZ_dnf.py'
Dec 13 07:08:56 compute-0 sudo[37993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:56 compute-0 python3.9[37995]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:08:57 compute-0 sudo[37993]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:58 compute-0 sudo[38146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srvlzpoxoffvigpgbzgunydgexetgwer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609737.8944657-347-37284117803878/AnsiballZ_file.py'
Dec 13 07:08:58 compute-0 sudo[38146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:58 compute-0 python3.9[38148]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:08:58 compute-0 sudo[38146]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:58 compute-0 sudo[38298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acblafwiozvtoqngvmezznwkmejemnvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609738.3757422-355-129980387542290/AnsiballZ_stat.py'
Dec 13 07:08:58 compute-0 sudo[38298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:58 compute-0 python3.9[38300]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:08:58 compute-0 sudo[38298]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:58 compute-0 sudo[38421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbwzcrskcnporcibbaldsjzlkwfzdvle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609738.3757422-355-129980387542290/AnsiballZ_copy.py'
Dec 13 07:08:58 compute-0 sudo[38421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:59 compute-0 python3.9[38423]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609738.3757422-355-129980387542290/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:08:59 compute-0 sudo[38421]: pam_unix(sudo:session): session closed for user root
Dec 13 07:08:59 compute-0 sudo[38573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uigbwchvzrkunoftmkxtjecvrxbgdgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609739.2712305-370-181434060614410/AnsiballZ_systemd.py'
Dec 13 07:08:59 compute-0 sudo[38573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:08:59 compute-0 python3.9[38575]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:08:59 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 13 07:09:00 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 07:09:00 compute-0 kernel: Bridge firewalling registered
Dec 13 07:09:00 compute-0 systemd-modules-load[38579]: Inserted module 'br_netfilter'
Dec 13 07:09:00 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 13 07:09:00 compute-0 sudo[38573]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:00 compute-0 sudo[38733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvchoyloqeqpqudglxvxdmvygddjgcqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609740.1776235-378-59125076682533/AnsiballZ_stat.py'
Dec 13 07:09:00 compute-0 sudo[38733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:00 compute-0 python3.9[38735]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:09:00 compute-0 sudo[38733]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:00 compute-0 sudo[38856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqlxsakdfvmtcqdazfntcodfanbhuduy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609740.1776235-378-59125076682533/AnsiballZ_copy.py'
Dec 13 07:09:00 compute-0 sudo[38856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:00 compute-0 python3.9[38858]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609740.1776235-378-59125076682533/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:09:00 compute-0 sudo[38856]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:01 compute-0 sudo[39008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvervapdjkgfdtdidbfavsnmsibewxpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609741.1895232-396-28302399861168/AnsiballZ_dnf.py'
Dec 13 07:09:01 compute-0 sudo[39008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:01 compute-0 python3.9[39010]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:09:07 compute-0 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 07:09:07 compute-0 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 07:09:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 07:09:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 07:09:07 compute-0 systemd[1]: Reloading.
Dec 13 07:09:08 compute-0 systemd-rc-local-generator[39068]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:09:08 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 07:09:08 compute-0 sudo[39008]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:09 compute-0 python3.9[40256]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:09:09 compute-0 python3.9[41371]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 13 07:09:10 compute-0 python3.9[42169]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:09:10 compute-0 sudo[43094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtnuwdumuebwnzootqcyagsceoqpufpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609750.2745078-435-196857660679510/AnsiballZ_command.py'
Dec 13 07:09:10 compute-0 sudo[43094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 07:09:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 07:09:10 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.229s CPU time.
Dec 13 07:09:10 compute-0 systemd[1]: run-r10fa14ae4b894295b2e4cb6242b36709.service: Deactivated successfully.
Dec 13 07:09:10 compute-0 python3.9[43128]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:09:10 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 07:09:10 compute-0 systemd[1]: Starting Authorization Manager...
Dec 13 07:09:11 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 07:09:11 compute-0 polkitd[43388]: Started polkitd version 0.117
Dec 13 07:09:11 compute-0 polkitd[43388]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 07:09:11 compute-0 polkitd[43388]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 07:09:11 compute-0 polkitd[43388]: Finished loading, compiling and executing 2 rules
Dec 13 07:09:11 compute-0 systemd[1]: Started Authorization Manager.
Dec 13 07:09:11 compute-0 polkitd[43388]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 07:09:11 compute-0 sudo[43094]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:11 compute-0 sudo[43552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnrtwcwdqhoxkrxfsdxvxpdgeinyraym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609751.2912264-444-207144698590507/AnsiballZ_systemd.py'
Dec 13 07:09:11 compute-0 sudo[43552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:11 compute-0 python3.9[43554]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:09:11 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 13 07:09:11 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 13 07:09:11 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 13 07:09:11 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 07:09:12 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 07:09:12 compute-0 sudo[43552]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:12 compute-0 python3.9[43715]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 13 07:09:13 compute-0 sudo[43865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msgkyoqvsrnizhpmmxxwywltlsqdtikp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609753.6858213-501-203543084461319/AnsiballZ_systemd.py'
Dec 13 07:09:13 compute-0 sudo[43865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:14 compute-0 python3.9[43867]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:09:14 compute-0 systemd[1]: Reloading.
Dec 13 07:09:14 compute-0 systemd-rc-local-generator[43888]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:09:14 compute-0 sudo[43865]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:14 compute-0 sudo[44055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qozmucqaosjnsqicglujkduotnglbflg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609754.4928718-501-189750904145270/AnsiballZ_systemd.py'
Dec 13 07:09:14 compute-0 sudo[44055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:14 compute-0 python3.9[44057]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:09:15 compute-0 systemd[1]: Reloading.
Dec 13 07:09:15 compute-0 systemd-rc-local-generator[44080]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:09:15 compute-0 sudo[44055]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:15 compute-0 sudo[44244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqgtmoiokbwufgdvkofgznegxkrowqyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609755.3657765-517-151947601165300/AnsiballZ_command.py'
Dec 13 07:09:15 compute-0 sudo[44244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:15 compute-0 python3.9[44246]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:09:15 compute-0 sudo[44244]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:16 compute-0 sudo[44397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekfhgbbvgkftuqygtondznfrpciauyff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609755.87847-525-56962662112720/AnsiballZ_command.py'
Dec 13 07:09:16 compute-0 sudo[44397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:16 compute-0 python3.9[44399]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:09:16 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 13 07:09:16 compute-0 sudo[44397]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:16 compute-0 sudo[44550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqplnofhwxuotmndrvbzsmemeudxeoxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609756.4021657-533-165473827124052/AnsiballZ_command.py'
Dec 13 07:09:16 compute-0 sudo[44550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:16 compute-0 python3.9[44552]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:09:17 compute-0 sudo[44550]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:18 compute-0 sudo[44712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdaeuyrjsdnasknizlxmpxomituwawaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609758.0217273-541-47767210443011/AnsiballZ_command.py'
Dec 13 07:09:18 compute-0 sudo[44712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:18 compute-0 python3.9[44714]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:09:18 compute-0 sudo[44712]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:18 compute-0 sudo[44865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgtxacfyjvnbdddoglfooefghdjguxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609758.5158327-549-245362133630212/AnsiballZ_systemd.py'
Dec 13 07:09:18 compute-0 sudo[44865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:18 compute-0 python3.9[44867]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:09:18 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 07:09:18 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 13 07:09:18 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 13 07:09:19 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 13 07:09:19 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 07:09:19 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 13 07:09:19 compute-0 sudo[44865]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:19 compute-0 sshd-session[31313]: Connection closed by 192.168.122.30 port 48678
Dec 13 07:09:19 compute-0 sshd-session[31310]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:09:19 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 07:09:19 compute-0 systemd[1]: session-8.scope: Consumed 1min 41.309s CPU time.
Dec 13 07:09:19 compute-0 systemd-logind[745]: Session 8 logged out. Waiting for processes to exit.
Dec 13 07:09:19 compute-0 systemd-logind[745]: Removed session 8.
Dec 13 07:09:24 compute-0 sshd-session[44898]: Accepted publickey for zuul from 192.168.122.30 port 58640 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:09:24 compute-0 systemd-logind[745]: New session 9 of user zuul.
Dec 13 07:09:24 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 13 07:09:24 compute-0 sshd-session[44898]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:09:25 compute-0 python3.9[45051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:09:26 compute-0 sudo[45205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiaylqydezmicjulsyofwdxnmozdggkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609765.7086265-36-54918173310108/AnsiballZ_getent.py'
Dec 13 07:09:26 compute-0 sudo[45205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:26 compute-0 python3.9[45207]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 13 07:09:26 compute-0 sudo[45205]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:26 compute-0 sudo[45358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsojuwdxnkzxlaeedrsbesuuzdxbkidr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609766.3457732-44-123238482943916/AnsiballZ_group.py'
Dec 13 07:09:26 compute-0 sudo[45358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:26 compute-0 python3.9[45360]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 07:09:26 compute-0 groupadd[45361]: group added to /etc/group: name=openvswitch, GID=42476
Dec 13 07:09:26 compute-0 groupadd[45361]: group added to /etc/gshadow: name=openvswitch
Dec 13 07:09:26 compute-0 groupadd[45361]: new group: name=openvswitch, GID=42476
Dec 13 07:09:26 compute-0 sudo[45358]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:27 compute-0 sudo[45516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsqagadonsyulmdjkpdtkktizpvxkhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609767.024153-52-260975306103664/AnsiballZ_user.py'
Dec 13 07:09:27 compute-0 sudo[45516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:27 compute-0 python3.9[45518]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 07:09:27 compute-0 useradd[45520]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 13 07:09:27 compute-0 useradd[45520]: add 'openvswitch' to group 'hugetlbfs'
Dec 13 07:09:27 compute-0 useradd[45520]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 13 07:09:27 compute-0 sudo[45516]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:28 compute-0 sudo[45676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylegqltwuduqgpfslpjzhbswjjorecta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609767.8041623-62-75956113307481/AnsiballZ_setup.py'
Dec 13 07:09:28 compute-0 sudo[45676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:28 compute-0 python3.9[45678]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:09:28 compute-0 sudo[45676]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:28 compute-0 sudo[45760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvlvzetowuwedbpbntxzxnadogqhhfaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609767.8041623-62-75956113307481/AnsiballZ_dnf.py'
Dec 13 07:09:28 compute-0 sudo[45760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:28 compute-0 python3.9[45762]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 07:09:34 compute-0 sudo[45760]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:35 compute-0 sudo[45926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljihxujtsugvhroreqjgvnuxujbtzwsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609774.8847432-76-163336050386948/AnsiballZ_dnf.py'
Dec 13 07:09:35 compute-0 sudo[45926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:35 compute-0 python3.9[45928]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:09:44 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Dec 13 07:09:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 07:09:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 07:09:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 07:09:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 07:09:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 07:09:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 07:09:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 07:09:44 compute-0 groupadd[45951]: group added to /etc/group: name=unbound, GID=993
Dec 13 07:09:44 compute-0 groupadd[45951]: group added to /etc/gshadow: name=unbound
Dec 13 07:09:44 compute-0 groupadd[45951]: new group: name=unbound, GID=993
Dec 13 07:09:44 compute-0 useradd[45958]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 13 07:09:44 compute-0 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 13 07:09:44 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 13 07:09:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 07:09:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 07:09:45 compute-0 systemd[1]: Reloading.
Dec 13 07:09:45 compute-0 systemd-rc-local-generator[46450]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:09:45 compute-0 systemd-sysv-generator[46453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:09:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 07:09:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 07:09:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 07:09:46 compute-0 systemd[1]: run-r6c88459f782f4b73970d8fe064708acb.service: Deactivated successfully.
Dec 13 07:09:46 compute-0 sudo[45926]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:46 compute-0 sudo[47023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clycuaunzvuuyhhqocezrrjogsrecduh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609786.3214786-84-99138967933033/AnsiballZ_systemd.py'
Dec 13 07:09:46 compute-0 sudo[47023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:47 compute-0 python3.9[47025]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:09:47 compute-0 systemd[1]: Reloading.
Dec 13 07:09:47 compute-0 systemd-rc-local-generator[47052]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:09:47 compute-0 systemd-sysv-generator[47055]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:09:47 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 13 07:09:47 compute-0 chown[47067]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 13 07:09:47 compute-0 ovs-ctl[47072]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 13 07:09:47 compute-0 ovs-ctl[47072]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 13 07:09:47 compute-0 ovs-ctl[47072]: Starting ovsdb-server [  OK  ]
Dec 13 07:09:47 compute-0 ovs-vsctl[47121]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 13 07:09:47 compute-0 ovs-vsctl[47141]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"075cc82e-193d-47f2-a248-9917472f5475\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 13 07:09:47 compute-0 ovs-ctl[47072]: Configuring Open vSwitch system IDs [  OK  ]
Dec 13 07:09:47 compute-0 ovs-ctl[47072]: Enabling remote OVSDB managers [  OK  ]
Dec 13 07:09:47 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 13 07:09:47 compute-0 ovs-vsctl[47147]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 13 07:09:47 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 13 07:09:47 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 13 07:09:47 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 13 07:09:47 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 13 07:09:47 compute-0 ovs-ctl[47191]: Inserting openvswitch module [  OK  ]
Dec 13 07:09:47 compute-0 ovs-ctl[47160]: Starting ovs-vswitchd [  OK  ]
Dec 13 07:09:47 compute-0 ovs-ctl[47160]: Enabling remote OVSDB managers [  OK  ]
Dec 13 07:09:47 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 13 07:09:47 compute-0 ovs-vsctl[47209]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 13 07:09:47 compute-0 systemd[1]: Starting Open vSwitch...
Dec 13 07:09:47 compute-0 systemd[1]: Finished Open vSwitch.
Dec 13 07:09:47 compute-0 sudo[47023]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:48 compute-0 python3.9[47360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:09:48 compute-0 sudo[47510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkrwzrcoafiuxlzbertohpikeyljahlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609788.5845225-102-128492978615815/AnsiballZ_sefcontext.py'
Dec 13 07:09:48 compute-0 sudo[47510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:49 compute-0 python3.9[47512]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 13 07:09:49 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Dec 13 07:09:49 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 07:09:49 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 07:09:49 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 07:09:49 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 07:09:49 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 07:09:49 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 07:09:49 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 07:09:50 compute-0 sudo[47510]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:50 compute-0 python3.9[47667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:09:51 compute-0 sudo[47823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clzepyvmnwizliuabknhebzlrmsgtjiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609791.0014267-120-27789720637615/AnsiballZ_dnf.py'
Dec 13 07:09:51 compute-0 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 13 07:09:51 compute-0 sudo[47823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:51 compute-0 python3.9[47825]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:09:52 compute-0 sudo[47823]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:52 compute-0 sudo[47976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hljakqcakniodopucfrcwdvidmrsyhee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609792.5515714-128-159949259665717/AnsiballZ_command.py'
Dec 13 07:09:52 compute-0 sudo[47976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:52 compute-0 python3.9[47978]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:09:53 compute-0 sudo[47976]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:54 compute-0 sudo[48263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ateajsbvoipjmmqoxvqwhbhpvwuidopr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609793.6994243-136-148701999484650/AnsiballZ_file.py'
Dec 13 07:09:54 compute-0 sudo[48263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:54 compute-0 python3.9[48265]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 07:09:54 compute-0 sudo[48263]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:54 compute-0 python3.9[48415]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:09:55 compute-0 sudo[48567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heyjculqatjgbaooioocourasyzylpyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609794.95822-152-29523834299866/AnsiballZ_dnf.py'
Dec 13 07:09:55 compute-0 sudo[48567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:55 compute-0 python3.9[48569]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:09:58 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 07:09:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 07:09:58 compute-0 systemd[1]: Reloading.
Dec 13 07:09:58 compute-0 systemd-sysv-generator[48604]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:09:58 compute-0 systemd-rc-local-generator[48601]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:09:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 07:09:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 07:09:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 07:09:59 compute-0 systemd[1]: run-r4d5decbe51dd49cbb85d3151aaad4b2d.service: Deactivated successfully.
Dec 13 07:09:59 compute-0 sudo[48567]: pam_unix(sudo:session): session closed for user root
Dec 13 07:09:59 compute-0 sudo[48883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axqfkentuxiogicihgimaweylbhdlygr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609799.2638607-160-181476080391076/AnsiballZ_systemd.py'
Dec 13 07:09:59 compute-0 sudo[48883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:09:59 compute-0 python3.9[48885]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:09:59 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 13 07:09:59 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 13 07:09:59 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 13 07:09:59 compute-0 systemd[1]: Stopping Network Manager...
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7463] caught SIGTERM, shutting down normally.
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7476] dhcp4 (eth0): canceled DHCP transaction
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7476] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7476] dhcp4 (eth0): state changed no lease
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7478] dhcp6 (eth0): canceled DHCP transaction
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7478] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7478] dhcp6 (eth0): state changed no lease
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7480] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 07:09:59 compute-0 NetworkManager[7245]: <info>  [1765609799.7504] exiting (success)
Dec 13 07:09:59 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 07:09:59 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 07:09:59 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 13 07:09:59 compute-0 systemd[1]: Stopped Network Manager.
Dec 13 07:09:59 compute-0 systemd[1]: Starting Network Manager...
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8019] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:7e7986d9-0598-4067-a630-6e2fad28fcbc)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8021] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8071] manager[0x557569987000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 07:09:59 compute-0 systemd[1]: Starting Hostname Service...
Dec 13 07:09:59 compute-0 systemd[1]: Started Hostname Service.
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8746] hostname: hostname: using hostnamed
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8746] hostname: static hostname changed from (none) to "compute-0"
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8749] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8754] manager[0x557569987000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8755] manager[0x557569987000]: rfkill: WWAN hardware radio set enabled
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8776] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8786] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8786] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8787] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8787] manager: Networking is enabled by state file
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8789] settings: Loaded settings plugin: keyfile (internal)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8792] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8814] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8824] dhcp: init: Using DHCP client 'internal'
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8826] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8830] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8833] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8839] device (lo): Activation: starting connection 'lo' (08c9145e-912a-4b86-86a1-5730fa82ae86)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8844] device (eth0): carrier: link connected
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8848] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8850] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8851] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8855] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8858] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8862] device (eth1): carrier: link connected
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8866] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8868] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0) (indicated)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8869] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8871] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8875] device (eth1): Activation: starting connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8880] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 07:09:59 compute-0 systemd[1]: Started Network Manager.
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8886] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8888] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8889] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8890] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8891] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8893] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8906] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8909] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8916] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8919] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8922] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8928] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8930] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8936] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8940] dhcp4 (eth0): state changed new lease, address=192.168.25.195
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8944] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.8999] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.9002] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.9007] device (lo): Activation: successful, device activated.
Dec 13 07:09:59 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.9088] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.9094] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.9116] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 13 07:09:59 compute-0 NetworkManager[48896]: <info>  [1765609799.9118] device (eth1): Activation: successful, device activated.
Dec 13 07:09:59 compute-0 sudo[48883]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:00 compute-0 sudo[49092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfhzdntttvygnhbzommmlgdejjhkbbsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609800.0613534-168-69234969137966/AnsiballZ_dnf.py'
Dec 13 07:10:00 compute-0 sudo[49092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:00 compute-0 python3.9[49094]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:10:00 compute-0 NetworkManager[48896]: <info>  [1765609800.9271] dhcp6 (eth0): state changed new lease, address=2001:db8::1cf
Dec 13 07:10:00 compute-0 NetworkManager[48896]: <info>  [1765609800.9284] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 07:10:00 compute-0 NetworkManager[48896]: <info>  [1765609800.9320] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 07:10:00 compute-0 NetworkManager[48896]: <info>  [1765609800.9321] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 07:10:00 compute-0 NetworkManager[48896]: <info>  [1765609800.9325] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 07:10:00 compute-0 NetworkManager[48896]: <info>  [1765609800.9327] device (eth0): Activation: successful, device activated.
Dec 13 07:10:00 compute-0 NetworkManager[48896]: <info>  [1765609800.9331] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 07:10:00 compute-0 NetworkManager[48896]: <info>  [1765609800.9334] manager: startup complete
Dec 13 07:10:00 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 13 07:10:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 07:10:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 07:10:07 compute-0 systemd[1]: Reloading.
Dec 13 07:10:07 compute-0 systemd-rc-local-generator[49159]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:10:07 compute-0 systemd-sysv-generator[49163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:10:07 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 07:10:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 07:10:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 07:10:07 compute-0 systemd[1]: run-rb2e60d43613742c98f546a7f916df477.service: Deactivated successfully.
Dec 13 07:10:07 compute-0 sudo[49092]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:08 compute-0 sudo[49570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nymwftimybwyitqawipxhecsczdhyige ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609808.1870978-180-139107466035876/AnsiballZ_stat.py'
Dec 13 07:10:08 compute-0 sudo[49570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:08 compute-0 python3.9[49572]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:10:08 compute-0 sudo[49570]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:09 compute-0 sudo[49722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyvgkuybqsrwyjwxqemlrotbwodgojeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609808.7197044-189-153135273413188/AnsiballZ_ini_file.py'
Dec 13 07:10:09 compute-0 sudo[49722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:09 compute-0 python3.9[49724]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:09 compute-0 sudo[49722]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:09 compute-0 sudo[49876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puzpdtzfjzidhgyhakqdpuzzukopdjgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609809.4359016-199-273093976215487/AnsiballZ_ini_file.py'
Dec 13 07:10:09 compute-0 sudo[49876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:09 compute-0 python3.9[49878]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:09 compute-0 sudo[49876]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:10 compute-0 sudo[50028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxyepxqelcuhtlpjzmfkuxxofpsihwbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609809.8920717-199-110274115137692/AnsiballZ_ini_file.py'
Dec 13 07:10:10 compute-0 sudo[50028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:10 compute-0 python3.9[50030]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:10 compute-0 sudo[50028]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:10 compute-0 sudo[50182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzxiadzjqozqwozuztvvtyzvzkuxehqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609810.3938901-214-156602311442288/AnsiballZ_ini_file.py'
Dec 13 07:10:10 compute-0 sudo[50182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:10 compute-0 python3.9[50184]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:10 compute-0 sudo[50182]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:10 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 07:10:11 compute-0 sudo[50334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuvfsopkmtwkkdbeuiexcjawnnyfcsgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609810.8703825-214-142060845996027/AnsiballZ_ini_file.py'
Dec 13 07:10:11 compute-0 sudo[50334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:11 compute-0 python3.9[50336]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:11 compute-0 sudo[50334]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:11 compute-0 sudo[50486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcsqhecktlarbretebrteoffftycqhks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609811.3657374-229-156092072474139/AnsiballZ_stat.py'
Dec 13 07:10:11 compute-0 sudo[50486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:11 compute-0 python3.9[50488]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:10:11 compute-0 sudo[50486]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:12 compute-0 sudo[50609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtmdrqissgjkdzwxdayfakhsjcwoarps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609811.3657374-229-156092072474139/AnsiballZ_copy.py'
Dec 13 07:10:12 compute-0 sudo[50609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:12 compute-0 python3.9[50611]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609811.3657374-229-156092072474139/.source _original_basename=.crx49vks follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:12 compute-0 sudo[50609]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:12 compute-0 sudo[50761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzpwyuoperpmrheipgnfnytjgkesfujt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609812.3502994-244-44976174416642/AnsiballZ_file.py'
Dec 13 07:10:12 compute-0 sudo[50761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:12 compute-0 python3.9[50763]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:12 compute-0 sudo[50761]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:13 compute-0 sudo[50913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftvqjqrltxvkgrpmqeedurxyzwqsgnmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609812.8225563-252-148929636656845/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 13 07:10:13 compute-0 sudo[50913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:13 compute-0 python3.9[50915]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 13 07:10:13 compute-0 sudo[50913]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:13 compute-0 sudo[51065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgedchyyyxcrseodpcsefdqhmpdmnbnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609813.4540453-261-22261458911344/AnsiballZ_file.py'
Dec 13 07:10:13 compute-0 sudo[51065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:13 compute-0 python3.9[51067]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:13 compute-0 sudo[51065]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:14 compute-0 sudo[51217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjnrimuerdthwvddvfwwutozddayodol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609814.0192459-271-117280778774170/AnsiballZ_stat.py'
Dec 13 07:10:14 compute-0 sudo[51217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:14 compute-0 sudo[51217]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:14 compute-0 sudo[51340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oysbtwkqybebeijrtiiefjadxkaanjoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609814.0192459-271-117280778774170/AnsiballZ_copy.py'
Dec 13 07:10:14 compute-0 sudo[51340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:14 compute-0 sudo[51340]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:15 compute-0 sudo[51492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qybwuksmdmgrzopsylwgrzjpwpycgdnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609814.8791957-286-125201648816785/AnsiballZ_slurp.py'
Dec 13 07:10:15 compute-0 sudo[51492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:15 compute-0 python3.9[51494]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 13 07:10:15 compute-0 sudo[51492]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:16 compute-0 sudo[51667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-detpggqsmylszgewpwdlmzecfzkjpjux ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609815.5161934-295-131695455347850/async_wrapper.py j178991850972 300 /home/zuul/.ansible/tmp/ansible-tmp-1765609815.5161934-295-131695455347850/AnsiballZ_edpm_os_net_config.py _'
Dec 13 07:10:16 compute-0 sudo[51667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:16 compute-0 ansible-async_wrapper.py[51669]: Invoked with j178991850972 300 /home/zuul/.ansible/tmp/ansible-tmp-1765609815.5161934-295-131695455347850/AnsiballZ_edpm_os_net_config.py _
Dec 13 07:10:16 compute-0 ansible-async_wrapper.py[51672]: Starting module and watcher
Dec 13 07:10:16 compute-0 ansible-async_wrapper.py[51672]: Start watching 51673 (300)
Dec 13 07:10:16 compute-0 ansible-async_wrapper.py[51673]: Start module (51673)
Dec 13 07:10:16 compute-0 ansible-async_wrapper.py[51669]: Return async_wrapper task started.
Dec 13 07:10:16 compute-0 sudo[51667]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:16 compute-0 python3.9[51674]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 13 07:10:16 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 13 07:10:16 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 13 07:10:16 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 13 07:10:16 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 13 07:10:16 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.6547] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.6563] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7005] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7007] audit: op="connection-add" uuid="f94d2fdf-f006-4d53-85ba-35f4397ffee0" name="br-ex-br" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7019] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7020] audit: op="connection-add" uuid="9696c1a6-1d6b-4f09-9d0c-893ae374b298" name="br-ex-port" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7030] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7031] audit: op="connection-add" uuid="1d265d2c-addc-4642-bdde-876a43f7ecf2" name="eth1-port" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7041] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7042] audit: op="connection-add" uuid="14c5abd4-15d5-4d4c-8290-744fc8b1677a" name="vlan20-port" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7051] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7053] audit: op="connection-add" uuid="dd02dda4-d172-4158-a6e3-d73f6545a27c" name="vlan21-port" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7061] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7063] audit: op="connection-add" uuid="71f80fa1-f4b3-4c0a-87d1-e45d36aa0390" name="vlan22-port" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7072] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7073] audit: op="connection-add" uuid="cce860f3-f224-40fc-bb73-c33cf53e9d79" name="vlan23-port" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7090] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7104] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7105] audit: op="connection-add" uuid="a97420cc-2009-4b2a-854b-70c000ef7060" name="br-ex-if" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7126] audit: op="connection-update" uuid="45fd5f29-c067-5e65-8f11-dae4f04176a0" name="ci-private-network" args="connection.timestamp,connection.slave-type,connection.controller,connection.master,connection.port-type,ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.addr-gen-mode,ipv6.method,ipv6.dns,ovs-external-ids.data,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.never-default,ovs-interface.type" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7139] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7140] audit: op="connection-add" uuid="1326a875-3832-433f-9bbf-a54f44ef0c11" name="vlan20-if" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7152] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7153] audit: op="connection-add" uuid="088cd2e5-9f2d-45ab-ad1b-b2eddcd3cde9" name="vlan21-if" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7166] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7168] audit: op="connection-add" uuid="c7c62a70-d2ff-4219-a62b-2dbc76d3ae1d" name="vlan22-if" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7180] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7181] audit: op="connection-add" uuid="51ad8775-8f6d-4c27-b845-0a5eb300ab9a" name="vlan23-if" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7191] audit: op="connection-delete" uuid="4a989926-6152-3dd8-8a07-a3472614de2b" name="Wired connection 1" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7200] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7202] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7207] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7210] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f94d2fdf-f006-4d53-85ba-35f4397ffee0)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7211] audit: op="connection-activate" uuid="f94d2fdf-f006-4d53-85ba-35f4397ffee0" name="br-ex-br" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7212] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7213] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7217] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7220] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9696c1a6-1d6b-4f09-9d0c-893ae374b298)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7221] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7222] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7225] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7228] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (1d265d2c-addc-4642-bdde-876a43f7ecf2)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7229] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7230] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7234] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7237] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (14c5abd4-15d5-4d4c-8290-744fc8b1677a)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7238] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7239] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7242] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7246] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (dd02dda4-d172-4158-a6e3-d73f6545a27c)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7247] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7248] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7251] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7255] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (71f80fa1-f4b3-4c0a-87d1-e45d36aa0390)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7256] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7257] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7260] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7264] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (cce860f3-f224-40fc-bb73-c33cf53e9d79)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7265] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7266] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7268] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7272] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7273] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7275] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7278] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a97420cc-2009-4b2a-854b-70c000ef7060)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7279] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7281] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7283] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7283] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7285] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7293] device (eth1): disconnecting for new activation request.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7293] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7295] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7296] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7297] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7298] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7299] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7300] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7303] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (1326a875-3832-433f-9bbf-a54f44ef0c11)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7303] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7305] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7306] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7306] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7308] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7308] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7310] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7312] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (088cd2e5-9f2d-45ab-ad1b-b2eddcd3cde9)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7313] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7314] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7315] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7316] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7317] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7318] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7319] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7322] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (c7c62a70-d2ff-4219-a62b-2dbc76d3ae1d)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7322] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7324] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7325] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7326] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7327] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <warn>  [1765609817.7328] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7329] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7334] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (51ad8775-8f6d-4c27-b845-0a5eb300ab9a)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7334] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7337] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7338] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7339] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7340] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7351] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7353] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7355] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7357] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7362] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7364] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7366] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7368] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7369] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7372] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7374] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7376] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7377] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7388] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7391] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7393] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7394] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7397] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 systemd-udevd[51680]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 07:10:17 compute-0 kernel: Timeout policy base is empty
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7405] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7407] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7408] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7410] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7413] dhcp4 (eth0): canceled DHCP transaction
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7413] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7413] dhcp4 (eth0): state changed no lease
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7414] dhcp6 (eth0): canceled DHCP transaction
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7414] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7414] dhcp6 (eth0): state changed no lease
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7418] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7425] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7432] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 13 07:10:17 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 07:10:17 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7533] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7537] dhcp4 (eth0): state changed new lease, address=192.168.25.195
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7545] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7566] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7621] device (eth1): Activation: starting connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7624] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7627] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7630] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51675 uid=0 result="fail" reason="Device is not activated"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7634] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7637] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7641] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7647] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7651] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7652] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7653] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7655] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7656] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7657] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7662] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7671] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7673] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7676] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7678] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7681] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7683] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7686] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7689] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7691] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7694] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7697] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7699] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7703] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7716] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7721] device (eth1): state change: ip-check -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7723] device (eth1)[Open vSwitch Port]: detaching ovs interface eth1
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7724] device (eth1): released from controller device eth1
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7728] device (eth1): disconnecting for new activation request.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7729] audit: op="connection-activate" uuid="45fd5f29-c067-5e65-8f11-dae4f04176a0" name="ci-private-network" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7756] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7761] device (eth1): Activation: starting connection 'ci-private-network' (45fd5f29-c067-5e65-8f11-dae4f04176a0)
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7764] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 07:10:17 compute-0 kernel: br-ex: entered promiscuous mode
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7806] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7810] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7819] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7860] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7863] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7898] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7900] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7903] device (eth1): Activation: successful, device activated.
Dec 13 07:10:17 compute-0 kernel: vlan22: entered promiscuous mode
Dec 13 07:10:17 compute-0 systemd-udevd[51681]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7933] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7941] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.7994] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8002] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8030] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8031] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8032] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8037] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8042] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8046] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 kernel: vlan21: entered promiscuous mode
Dec 13 07:10:17 compute-0 kernel: vlan23: entered promiscuous mode
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8222] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 13 07:10:17 compute-0 kernel: vlan20: entered promiscuous mode
Dec 13 07:10:17 compute-0 systemd-udevd[51679]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8237] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 13 07:10:17 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8269] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8275] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8311] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8317] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8326] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8333] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8335] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8346] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8353] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8371] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8401] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8404] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 07:10:17 compute-0 NetworkManager[48896]: <info>  [1765609817.8411] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 07:10:18 compute-0 NetworkManager[48896]: <info>  [1765609818.9287] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.0517] checkpoint[0x55756995d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.0519] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.1794] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.1805] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.3505] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.4481] checkpoint[0x55756995da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.4484] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 sudo[52033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emdriohpaqaidkcheplchyfzhulzfjct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609819.2872288-295-142708137304031/AnsiballZ_async_status.py'
Dec 13 07:10:19 compute-0 sudo[52033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.6653] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.6661] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 python3.9[52035]: ansible-ansible.legacy.async_status Invoked with jid=j178991850972.51669 mode=status _async_dir=/root/.ansible_async
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.8118] audit: op="networking-control" arg="global-dns-configuration" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.8130] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.8135] audit: op="networking-control" arg="global-dns-configuration" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 sudo[52033]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.8173] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.9415] checkpoint[0x55756995daf0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Dec 13 07:10:19 compute-0 NetworkManager[48896]: <info>  [1765609819.9419] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51675 uid=0 result="success"
Dec 13 07:10:19 compute-0 ansible-async_wrapper.py[51673]: Module complete (51673)
Dec 13 07:10:21 compute-0 ansible-async_wrapper.py[51672]: Done in kid B.
Dec 13 07:10:22 compute-0 sudo[52137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuhtvsystoiasvebmpvjrnqsxxaffhze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609819.2872288-295-142708137304031/AnsiballZ_async_status.py'
Dec 13 07:10:22 compute-0 sudo[52137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:23 compute-0 python3.9[52139]: ansible-ansible.legacy.async_status Invoked with jid=j178991850972.51669 mode=status _async_dir=/root/.ansible_async
Dec 13 07:10:23 compute-0 sudo[52137]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:23 compute-0 sudo[52237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctxuvwlcmkfijpabrfonxrsorivpyutz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609819.2872288-295-142708137304031/AnsiballZ_async_status.py'
Dec 13 07:10:23 compute-0 sudo[52237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:23 compute-0 python3.9[52239]: ansible-ansible.legacy.async_status Invoked with jid=j178991850972.51669 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 07:10:23 compute-0 sudo[52237]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:23 compute-0 sudo[52389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufwrdrosybkiyrvhtrgxbsinenjicxnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609823.6802783-322-144598896252252/AnsiballZ_stat.py'
Dec 13 07:10:23 compute-0 sudo[52389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:24 compute-0 python3.9[52391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:10:24 compute-0 sudo[52389]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:24 compute-0 sudo[52512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkzepaxqawbrnslcsiluldcouionayai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609823.6802783-322-144598896252252/AnsiballZ_copy.py'
Dec 13 07:10:24 compute-0 sudo[52512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:24 compute-0 python3.9[52514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609823.6802783-322-144598896252252/.source.returncode _original_basename=.i7e6mtbn follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:24 compute-0 sudo[52512]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:24 compute-0 sudo[52664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbspwhwvhypgogonazcvppsrszzlrjvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609824.6031902-338-70353546075863/AnsiballZ_stat.py'
Dec 13 07:10:24 compute-0 sudo[52664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:24 compute-0 python3.9[52666]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:10:24 compute-0 sudo[52664]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:25 compute-0 sudo[52787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxscrgmudmkarridnmmrajtpscpukxgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609824.6031902-338-70353546075863/AnsiballZ_copy.py'
Dec 13 07:10:25 compute-0 sudo[52787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:25 compute-0 python3.9[52789]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609824.6031902-338-70353546075863/.source.cfg _original_basename=.w6uebeag follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:25 compute-0 sudo[52787]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:25 compute-0 sudo[52939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flckngqeirjunlmgqeohmyrazelhcgjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609825.4824977-353-114370529632190/AnsiballZ_systemd.py'
Dec 13 07:10:25 compute-0 sudo[52939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:25 compute-0 python3.9[52941]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:10:25 compute-0 systemd[1]: Reloading Network Manager...
Dec 13 07:10:25 compute-0 NetworkManager[48896]: <info>  [1765609825.9778] audit: op="reload" arg="0" pid=52945 uid=0 result="success"
Dec 13 07:10:25 compute-0 NetworkManager[48896]: <info>  [1765609825.9783] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 13 07:10:25 compute-0 NetworkManager[48896]: <info>  [1765609825.9784] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 13 07:10:25 compute-0 systemd[1]: Reloaded Network Manager.
Dec 13 07:10:26 compute-0 sudo[52939]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:26 compute-0 sshd-session[44901]: Connection closed by 192.168.122.30 port 58640
Dec 13 07:10:26 compute-0 sshd-session[44898]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:10:26 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 07:10:26 compute-0 systemd[1]: session-9.scope: Consumed 38.647s CPU time.
Dec 13 07:10:26 compute-0 systemd-logind[745]: Session 9 logged out. Waiting for processes to exit.
Dec 13 07:10:26 compute-0 systemd-logind[745]: Removed session 9.
Dec 13 07:10:29 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 07:10:30 compute-0 sshd-session[52978]: Accepted publickey for zuul from 192.168.122.30 port 49662 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:10:30 compute-0 systemd-logind[745]: New session 10 of user zuul.
Dec 13 07:10:30 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 13 07:10:30 compute-0 sshd-session[52978]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:10:31 compute-0 python3.9[53131]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:10:32 compute-0 python3.9[53285]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:10:33 compute-0 python3.9[53479]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:10:33 compute-0 sshd-session[52981]: Connection closed by 192.168.122.30 port 49662
Dec 13 07:10:33 compute-0 sshd-session[52978]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:10:33 compute-0 systemd-logind[745]: Session 10 logged out. Waiting for processes to exit.
Dec 13 07:10:33 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 07:10:33 compute-0 systemd[1]: session-10.scope: Consumed 1.711s CPU time.
Dec 13 07:10:33 compute-0 systemd-logind[745]: Removed session 10.
Dec 13 07:10:36 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 07:10:39 compute-0 sshd-session[53507]: Accepted publickey for zuul from 192.168.122.30 port 52208 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:10:39 compute-0 systemd-logind[745]: New session 11 of user zuul.
Dec 13 07:10:39 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 13 07:10:39 compute-0 sshd-session[53507]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:10:39 compute-0 python3.9[53661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:10:40 compute-0 python3.9[53815]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:10:41 compute-0 sudo[53969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tibuidqnqgurvnxdjxpesqwywqelbwpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609840.956585-40-25359880302679/AnsiballZ_setup.py'
Dec 13 07:10:41 compute-0 sudo[53969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:41 compute-0 python3.9[53971]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:10:41 compute-0 sudo[53969]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:41 compute-0 sudo[54053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edrnuzyspmqmbvyadqodjquijbykikoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609840.956585-40-25359880302679/AnsiballZ_dnf.py'
Dec 13 07:10:41 compute-0 sudo[54053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:42 compute-0 python3.9[54055]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:10:43 compute-0 sudo[54053]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:43 compute-0 sudo[54207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpvolnbpbjfnbsxwcunlpprtqeibifye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609843.2138073-52-125996526931534/AnsiballZ_setup.py'
Dec 13 07:10:43 compute-0 sudo[54207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:43 compute-0 python3.9[54209]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:10:43 compute-0 sudo[54207]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:44 compute-0 sudo[54402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaqaefrcchsemnsivjcfdeuixsnhxits ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609844.0425072-63-10252664835211/AnsiballZ_file.py'
Dec 13 07:10:44 compute-0 sudo[54402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:44 compute-0 python3.9[54404]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:44 compute-0 sudo[54402]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:44 compute-0 sudo[54554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exnmryoosvjybfprqvcawfgktcxqimgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609844.6696002-71-127593617036882/AnsiballZ_command.py'
Dec 13 07:10:44 compute-0 sudo[54554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:45 compute-0 python3.9[54556]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3730731190-merged.mount: Deactivated successfully.
Dec 13 07:10:45 compute-0 podman[54557]: 2025-12-13 07:10:45.184966755 +0000 UTC m=+0.029003722 system refresh
Dec 13 07:10:45 compute-0 sudo[54554]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:45 compute-0 sudo[54715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfpkokkztktksugrusbdfkdhoddkakcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609845.3247852-79-240408286586893/AnsiballZ_stat.py'
Dec 13 07:10:45 compute-0 sudo[54715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:45 compute-0 python3.9[54717]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:10:45 compute-0 sudo[54715]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 07:10:46 compute-0 sudo[54839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxiqiwjwxivrwxotqshnbbathfadpdbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609845.3247852-79-240408286586893/AnsiballZ_copy.py'
Dec 13 07:10:46 compute-0 sudo[54839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:46 compute-0 python3.9[54841]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609845.3247852-79-240408286586893/.source.json follow=False _original_basename=podman_network_config.j2 checksum=9dbfc7db70a09a2b7e7975cbca18d4f65ab65e4c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:10:46 compute-0 sudo[54839]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:46 compute-0 sudo[54991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avemsxusozivflwxijhvawcysichnfyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609846.4755156-94-6740131187915/AnsiballZ_stat.py'
Dec 13 07:10:46 compute-0 sudo[54991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:46 compute-0 python3.9[54993]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:10:46 compute-0 sudo[54991]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:47 compute-0 sudo[55114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swixvqkefojqcqagztriowyuxwcnexhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609846.4755156-94-6740131187915/AnsiballZ_copy.py'
Dec 13 07:10:47 compute-0 sudo[55114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:47 compute-0 python3.9[55116]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609846.4755156-94-6740131187915/.source.conf follow=False _original_basename=registries.conf.j2 checksum=97c740afc5391c47ef8b0bfc44a8fae07d2d9f9b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:10:47 compute-0 sudo[55114]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:47 compute-0 sudo[55266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnltzmgatdomqdxrbydrgrwbvuwyqtan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609847.4175296-110-206254333834094/AnsiballZ_ini_file.py'
Dec 13 07:10:47 compute-0 sudo[55266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:47 compute-0 python3.9[55268]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:10:47 compute-0 sudo[55266]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:48 compute-0 sudo[55418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wktcoazjidjglhnxufzuconscuxystem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609847.9851675-110-37591881648900/AnsiballZ_ini_file.py'
Dec 13 07:10:48 compute-0 sudo[55418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:48 compute-0 python3.9[55420]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:10:48 compute-0 sudo[55418]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:48 compute-0 sudo[55570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrkbwkmpysuokhyvyndkaylkxbmktylf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609848.4351902-110-100879839388338/AnsiballZ_ini_file.py'
Dec 13 07:10:48 compute-0 sudo[55570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:48 compute-0 python3.9[55572]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:10:48 compute-0 sudo[55570]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:49 compute-0 sudo[55723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclcjmdypfgkyqiccsqivwqdchufjtcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609848.8941534-110-67161164510814/AnsiballZ_ini_file.py'
Dec 13 07:10:49 compute-0 sudo[55723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:49 compute-0 python3.9[55725]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:10:49 compute-0 sudo[55723]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:49 compute-0 sudo[55875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwgdyvzansdaccdinvznhbawlpomiqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609849.430654-141-226194080230797/AnsiballZ_dnf.py'
Dec 13 07:10:49 compute-0 sudo[55875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:49 compute-0 python3.9[55877]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:10:50 compute-0 sudo[55875]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:51 compute-0 sudo[56028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvyhzkhuhxmuwiodgwkknodwyjntdsjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609851.11056-152-119328588847797/AnsiballZ_setup.py'
Dec 13 07:10:51 compute-0 sudo[56028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:51 compute-0 python3.9[56030]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:10:51 compute-0 sudo[56028]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:51 compute-0 sudo[56182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujirguhlzbtrxcqxvslmgtudaipveapo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609851.705292-160-243456760006010/AnsiballZ_stat.py'
Dec 13 07:10:51 compute-0 sudo[56182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:52 compute-0 python3.9[56184]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:10:52 compute-0 sudo[56182]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:52 compute-0 sudo[56334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlpgomteswxxdgcxlhrbznsflpzhuhrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609852.2306814-169-201116682019464/AnsiballZ_stat.py'
Dec 13 07:10:52 compute-0 sudo[56334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:52 compute-0 python3.9[56336]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:10:52 compute-0 sudo[56334]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:52 compute-0 sudo[56486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ficlxdtjclwphbgvjfeylpjwehhqobdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609852.7710302-179-130175697063447/AnsiballZ_command.py'
Dec 13 07:10:52 compute-0 sudo[56486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:53 compute-0 python3.9[56488]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:10:53 compute-0 sudo[56486]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:53 compute-0 sudo[56639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyzdbnwkkhkyeoqpcceztbgpptcxgadu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609853.3276072-189-82152597584840/AnsiballZ_service_facts.py'
Dec 13 07:10:53 compute-0 sudo[56639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:53 compute-0 python3.9[56641]: ansible-service_facts Invoked
Dec 13 07:10:53 compute-0 network[56658]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:10:53 compute-0 network[56659]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:10:53 compute-0 network[56660]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:10:55 compute-0 sudo[56639]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:56 compute-0 sudo[56943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxymyauycxfzyozevcpnrkujxobzeveu ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765609856.2170916-204-108755271290950/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765609856.2170916-204-108755271290950/args'
Dec 13 07:10:56 compute-0 sudo[56943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:56 compute-0 sudo[56943]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:56 compute-0 sudo[57110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivnkbetcxbhdjjnivsirowumnhaajbsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609856.671614-215-28029753183654/AnsiballZ_dnf.py'
Dec 13 07:10:56 compute-0 sudo[57110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:57 compute-0 python3.9[57112]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:10:58 compute-0 sudo[57110]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:58 compute-0 sudo[57263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewdvrnpzsjcnbesemaherjqpvioqmgrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609858.2980719-228-260579249867503/AnsiballZ_package_facts.py'
Dec 13 07:10:58 compute-0 sudo[57263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:59 compute-0 python3.9[57265]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 13 07:10:59 compute-0 sudo[57263]: pam_unix(sudo:session): session closed for user root
Dec 13 07:10:59 compute-0 sudo[57415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdoipijqdwiulydcamqnrynmknjbazuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609859.4913623-238-118250819008665/AnsiballZ_stat.py'
Dec 13 07:10:59 compute-0 sudo[57415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:10:59 compute-0 python3.9[57417]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:10:59 compute-0 sudo[57415]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:00 compute-0 sudo[57540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mctcsxchatjuzfitfxrfhyocygsjqpve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609859.4913623-238-118250819008665/AnsiballZ_copy.py'
Dec 13 07:11:00 compute-0 sudo[57540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:00 compute-0 python3.9[57542]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609859.4913623-238-118250819008665/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:00 compute-0 sudo[57540]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:00 compute-0 sudo[57694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onzlwbkobqnkwjkndfnqonsqmgefzeys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609860.498351-253-14332203429383/AnsiballZ_stat.py'
Dec 13 07:11:00 compute-0 sudo[57694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:00 compute-0 python3.9[57696]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:00 compute-0 sudo[57694]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:01 compute-0 sudo[57819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrlbkeskwfemcdteoqhtfmnfukjkumrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609860.498351-253-14332203429383/AnsiballZ_copy.py'
Dec 13 07:11:01 compute-0 sudo[57819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:01 compute-0 python3.9[57821]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609860.498351-253-14332203429383/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:01 compute-0 sudo[57819]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:01 compute-0 sudo[57973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgiukasgkyszuqfiamuejudeszdrjceu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609861.6488118-274-56875813329633/AnsiballZ_lineinfile.py'
Dec 13 07:11:01 compute-0 sudo[57973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:02 compute-0 python3.9[57975]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:02 compute-0 sudo[57973]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:02 compute-0 sudo[58127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgybscueekfwwryuobkvlrfjbqjvzopq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609862.5407376-289-196132339585690/AnsiballZ_setup.py'
Dec 13 07:11:02 compute-0 sudo[58127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:02 compute-0 python3.9[58129]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:11:03 compute-0 sudo[58127]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:03 compute-0 sudo[58211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfngyhqfkdogxaeqyhkicezewhpxdtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609862.5407376-289-196132339585690/AnsiballZ_systemd.py'
Dec 13 07:11:03 compute-0 sudo[58211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:03 compute-0 python3.9[58213]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:11:03 compute-0 sudo[58211]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:04 compute-0 sudo[58365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spmnfbmnhltbnkdfsodcaxkbvjubzjdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609864.2810123-305-86681926085237/AnsiballZ_setup.py'
Dec 13 07:11:04 compute-0 sudo[58365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:04 compute-0 python3.9[58367]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:11:05 compute-0 sudo[58365]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:05 compute-0 sudo[58449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoousyytyghxbdsyjwmzohbxeupndibq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609864.2810123-305-86681926085237/AnsiballZ_systemd.py'
Dec 13 07:11:05 compute-0 sudo[58449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:05 compute-0 python3.9[58451]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:11:05 compute-0 chronyd[754]: chronyd exiting
Dec 13 07:11:05 compute-0 systemd[1]: Stopping NTP client/server...
Dec 13 07:11:05 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 13 07:11:05 compute-0 systemd[1]: Stopped NTP client/server.
Dec 13 07:11:05 compute-0 systemd[1]: Starting NTP client/server...
Dec 13 07:11:05 compute-0 chronyd[58459]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 13 07:11:05 compute-0 chronyd[58459]: Frequency -4.566 +/- 0.327 ppm read from /var/lib/chrony/drift
Dec 13 07:11:05 compute-0 chronyd[58459]: Loaded seccomp filter (level 2)
Dec 13 07:11:05 compute-0 systemd[1]: Started NTP client/server.
Dec 13 07:11:05 compute-0 sudo[58449]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:05 compute-0 sshd-session[53510]: Connection closed by 192.168.122.30 port 52208
Dec 13 07:11:05 compute-0 sshd-session[53507]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:11:05 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 07:11:05 compute-0 systemd[1]: session-11.scope: Consumed 18.888s CPU time.
Dec 13 07:11:05 compute-0 systemd-logind[745]: Session 11 logged out. Waiting for processes to exit.
Dec 13 07:11:05 compute-0 systemd-logind[745]: Removed session 11.
Dec 13 07:11:11 compute-0 sshd-session[58485]: Accepted publickey for zuul from 192.168.122.30 port 47936 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:11:11 compute-0 systemd-logind[745]: New session 12 of user zuul.
Dec 13 07:11:11 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 13 07:11:11 compute-0 sshd-session[58485]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:11:11 compute-0 sudo[58638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnrywqqbxcscwlmtyvlcnhkzdbgdpryi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609871.4163039-22-157953775789655/AnsiballZ_file.py'
Dec 13 07:11:11 compute-0 sudo[58638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:12 compute-0 python3.9[58640]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:12 compute-0 sudo[58638]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:12 compute-0 sudo[58790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amvukjwoxcsacjvxaafuybhavkwxwhje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609872.1388798-34-180363317541784/AnsiballZ_stat.py'
Dec 13 07:11:12 compute-0 sudo[58790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:12 compute-0 python3.9[58792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:12 compute-0 sudo[58790]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:12 compute-0 sudo[58913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrhwxoyqoyrehkbufeedlqirjzuwygko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609872.1388798-34-180363317541784/AnsiballZ_copy.py'
Dec 13 07:11:12 compute-0 sudo[58913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:13 compute-0 python3.9[58915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609872.1388798-34-180363317541784/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:13 compute-0 sudo[58913]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:13 compute-0 sshd-session[58488]: Connection closed by 192.168.122.30 port 47936
Dec 13 07:11:13 compute-0 sshd-session[58485]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:11:13 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 07:11:13 compute-0 systemd[1]: session-12.scope: Consumed 1.156s CPU time.
Dec 13 07:11:13 compute-0 systemd-logind[745]: Session 12 logged out. Waiting for processes to exit.
Dec 13 07:11:13 compute-0 systemd-logind[745]: Removed session 12.
Dec 13 07:11:18 compute-0 sshd-session[58940]: Accepted publickey for zuul from 192.168.122.30 port 42100 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:11:18 compute-0 systemd-logind[745]: New session 13 of user zuul.
Dec 13 07:11:18 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 13 07:11:18 compute-0 sshd-session[58940]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:11:19 compute-0 python3.9[59093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:11:19 compute-0 sudo[59247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyulgkkftutyshsjiozvhhpdshogxvyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609879.5797005-33-196789889404082/AnsiballZ_file.py'
Dec 13 07:11:19 compute-0 sudo[59247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:20 compute-0 python3.9[59249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:20 compute-0 sudo[59247]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:20 compute-0 sudo[59422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkolzfovgvkkwflmobrghrlwuvgangq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609880.175237-41-100752618407180/AnsiballZ_stat.py'
Dec 13 07:11:20 compute-0 sudo[59422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:20 compute-0 python3.9[59424]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:20 compute-0 sudo[59422]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:21 compute-0 sudo[59545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncewinjeolnnvsyeayxxoxyasgapdpzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609880.175237-41-100752618407180/AnsiballZ_copy.py'
Dec 13 07:11:21 compute-0 sudo[59545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:21 compute-0 python3.9[59547]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765609880.175237-41-100752618407180/.source.json _original_basename=.iwawi7m5 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:21 compute-0 sudo[59545]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:21 compute-0 sudo[59697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwdedznvqyleokdycaskydcfibskplu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609881.5705988-64-14657562343924/AnsiballZ_stat.py'
Dec 13 07:11:21 compute-0 sudo[59697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:21 compute-0 python3.9[59699]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:21 compute-0 sudo[59697]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:22 compute-0 sudo[59820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntmzdmjpanuebdvmjwzoitbxfnknbenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609881.5705988-64-14657562343924/AnsiballZ_copy.py'
Dec 13 07:11:22 compute-0 sudo[59820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:22 compute-0 python3.9[59822]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609881.5705988-64-14657562343924/.source _original_basename=.08btlcmp follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:22 compute-0 sudo[59820]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:22 compute-0 sudo[59972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onawvphqkmouhgazhaonbdokhleszhdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609882.455762-80-242561414179616/AnsiballZ_file.py'
Dec 13 07:11:22 compute-0 sudo[59972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:22 compute-0 python3.9[59974]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:11:22 compute-0 sudo[59972]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:23 compute-0 sudo[60124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulzbjdvuglwufczokwuciiqpoeiyxps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609882.9051979-88-231211034990961/AnsiballZ_stat.py'
Dec 13 07:11:23 compute-0 sudo[60124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:23 compute-0 python3.9[60126]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:23 compute-0 sudo[60124]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:23 compute-0 sudo[60247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcaekpnbmeeaefmwezdgrrquejvkgjqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609882.9051979-88-231211034990961/AnsiballZ_copy.py'
Dec 13 07:11:23 compute-0 sudo[60247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:23 compute-0 python3.9[60249]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609882.9051979-88-231211034990961/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:11:23 compute-0 sudo[60247]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:23 compute-0 sudo[60399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbptjhjdmssmecragawsftpuhvlmlbvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609883.7394176-88-171973322814779/AnsiballZ_stat.py'
Dec 13 07:11:23 compute-0 sudo[60399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:24 compute-0 python3.9[60401]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:24 compute-0 sudo[60399]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:24 compute-0 sudo[60522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seyhzmwofgklgwmjikdaystvdddvlqae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609883.7394176-88-171973322814779/AnsiballZ_copy.py'
Dec 13 07:11:24 compute-0 sudo[60522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:24 compute-0 python3.9[60524]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765609883.7394176-88-171973322814779/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:11:24 compute-0 sudo[60522]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:24 compute-0 sudo[60674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntpsdnthlxqtdppyezbbsnxhagvuvxzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609884.6137176-117-96752039571845/AnsiballZ_file.py'
Dec 13 07:11:24 compute-0 sudo[60674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:25 compute-0 python3.9[60676]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:25 compute-0 sudo[60674]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:25 compute-0 sudo[60826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mshwqwtfnfupmyrqzapjjrkbfsezcgcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609885.1973333-125-221929501580681/AnsiballZ_stat.py'
Dec 13 07:11:25 compute-0 sudo[60826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:25 compute-0 python3.9[60828]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:25 compute-0 sudo[60826]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:25 compute-0 sudo[60949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgmngvayqweoxnmuwdmzmldnqlykxtjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609885.1973333-125-221929501580681/AnsiballZ_copy.py'
Dec 13 07:11:25 compute-0 sudo[60949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:25 compute-0 python3.9[60951]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609885.1973333-125-221929501580681/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:25 compute-0 sudo[60949]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:26 compute-0 sudo[61101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztoipfmqidjepvmiutcjsgxlvvdzbxtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609886.0176003-140-273033734956672/AnsiballZ_stat.py'
Dec 13 07:11:26 compute-0 sudo[61101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:26 compute-0 python3.9[61103]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:26 compute-0 sudo[61101]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:26 compute-0 sudo[61224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmzlmvmhsncbyifqgcxlwsywbwooklmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609886.0176003-140-273033734956672/AnsiballZ_copy.py'
Dec 13 07:11:26 compute-0 sudo[61224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:26 compute-0 python3.9[61226]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609886.0176003-140-273033734956672/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:26 compute-0 sudo[61224]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:27 compute-0 sudo[61376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrstlhrrozmekkraphkyrbykxyduorkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609886.8673067-155-203586382000792/AnsiballZ_systemd.py'
Dec 13 07:11:27 compute-0 sudo[61376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:27 compute-0 python3.9[61378]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:11:27 compute-0 systemd[1]: Reloading.
Dec 13 07:11:27 compute-0 systemd-rc-local-generator[61401]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:11:27 compute-0 systemd-sysv-generator[61404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:11:27 compute-0 systemd[1]: Reloading.
Dec 13 07:11:27 compute-0 systemd-sysv-generator[61438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:11:27 compute-0 systemd-rc-local-generator[61435]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:11:27 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 13 07:11:27 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 13 07:11:27 compute-0 sudo[61376]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:28 compute-0 sudo[61603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxeuxyjkikmksxopraqxmxpyoxzgadtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609888.0630047-163-181045583486872/AnsiballZ_stat.py'
Dec 13 07:11:28 compute-0 sudo[61603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:28 compute-0 python3.9[61605]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:28 compute-0 sudo[61603]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:28 compute-0 sudo[61726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqtjhmvogzfslegmioscavkbbwtpovse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609888.0630047-163-181045583486872/AnsiballZ_copy.py'
Dec 13 07:11:28 compute-0 sudo[61726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:28 compute-0 python3.9[61728]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609888.0630047-163-181045583486872/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:28 compute-0 sudo[61726]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:29 compute-0 sudo[61878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vudiwvvryanlescprxcmpentlnthlibh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609888.9470243-178-138205182362353/AnsiballZ_stat.py'
Dec 13 07:11:29 compute-0 sudo[61878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:29 compute-0 python3.9[61880]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:29 compute-0 sudo[61878]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:29 compute-0 sudo[62001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbsjcugtnpjibsftmzytrokrptetfeov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609888.9470243-178-138205182362353/AnsiballZ_copy.py'
Dec 13 07:11:29 compute-0 sudo[62001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:29 compute-0 python3.9[62003]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609888.9470243-178-138205182362353/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:29 compute-0 sudo[62001]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:30 compute-0 sudo[62153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouhyiwwgnfwbybnoujsjjxdgrwiqhgpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609889.8739405-193-73488123760299/AnsiballZ_systemd.py'
Dec 13 07:11:30 compute-0 sudo[62153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:30 compute-0 python3.9[62155]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:11:30 compute-0 systemd[1]: Reloading.
Dec 13 07:11:30 compute-0 systemd-rc-local-generator[62183]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:11:30 compute-0 systemd-sysv-generator[62187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:11:30 compute-0 systemd[1]: Reloading.
Dec 13 07:11:30 compute-0 systemd-rc-local-generator[62212]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:11:30 compute-0 systemd-sysv-generator[62216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:11:30 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 07:11:30 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 07:11:30 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 07:11:30 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 07:11:30 compute-0 sudo[62153]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:31 compute-0 python3.9[62382]: ansible-ansible.builtin.service_facts Invoked
Dec 13 07:11:31 compute-0 network[62399]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:11:31 compute-0 network[62400]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:11:31 compute-0 network[62401]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:11:33 compute-0 sudo[62661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igymetgbomrqzomgtonjsurbwpdkbyvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609893.173512-209-159530941375891/AnsiballZ_systemd.py'
Dec 13 07:11:33 compute-0 sudo[62661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:33 compute-0 python3.9[62663]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:11:33 compute-0 systemd[1]: Reloading.
Dec 13 07:11:33 compute-0 systemd-rc-local-generator[62686]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:11:33 compute-0 systemd-sysv-generator[62689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:11:33 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 13 07:11:34 compute-0 iptables.init[62703]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 13 07:11:34 compute-0 iptables.init[62703]: iptables: Flushing firewall rules: [  OK  ]
Dec 13 07:11:34 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 13 07:11:34 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 13 07:11:34 compute-0 sudo[62661]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:34 compute-0 sudo[62897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-robxbfvjbkcxoxgvtjxllhblkssvevcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609894.2222507-209-178828806734567/AnsiballZ_systemd.py'
Dec 13 07:11:34 compute-0 sudo[62897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:34 compute-0 python3.9[62899]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:11:34 compute-0 sudo[62897]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:35 compute-0 sudo[63051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxeilvjsykcrkexiiyrfywpmaxrnova ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609894.8719676-225-32168364435966/AnsiballZ_systemd.py'
Dec 13 07:11:35 compute-0 sudo[63051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:35 compute-0 python3.9[63053]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:11:35 compute-0 systemd[1]: Reloading.
Dec 13 07:11:35 compute-0 systemd-sysv-generator[63082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:11:35 compute-0 systemd-rc-local-generator[63079]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:11:35 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 13 07:11:35 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 13 07:11:35 compute-0 sudo[63051]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:36 compute-0 sudo[63242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqpeigsoirmrqnpicwzhmwvznewzkxwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609895.7054315-233-87767368111325/AnsiballZ_command.py'
Dec 13 07:11:36 compute-0 sudo[63242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:36 compute-0 python3.9[63244]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:11:36 compute-0 sudo[63242]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:36 compute-0 sudo[63395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhvyqoajgdqimxviacqcrxitorqhobht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609896.5048082-247-269937766668422/AnsiballZ_stat.py'
Dec 13 07:11:36 compute-0 sudo[63395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:36 compute-0 python3.9[63397]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:36 compute-0 sudo[63395]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:37 compute-0 sudo[63520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwdcwnupulhslnepqrdfdlfhspkvnfub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609896.5048082-247-269937766668422/AnsiballZ_copy.py'
Dec 13 07:11:37 compute-0 sudo[63520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:37 compute-0 python3.9[63522]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609896.5048082-247-269937766668422/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:37 compute-0 sudo[63520]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:37 compute-0 sudo[63673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihutaowfmgwhxkxlwwpchadqfjwmirfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609897.4263856-262-257744343468140/AnsiballZ_systemd.py'
Dec 13 07:11:37 compute-0 sudo[63673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:37 compute-0 python3.9[63675]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:11:37 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 13 07:11:37 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 13 07:11:37 compute-0 sshd[963]: Received SIGHUP; restarting.
Dec 13 07:11:37 compute-0 sshd[963]: Server listening on 0.0.0.0 port 22.
Dec 13 07:11:37 compute-0 sshd[963]: Server listening on :: port 22.
Dec 13 07:11:37 compute-0 sudo[63673]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:38 compute-0 sudo[63829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzjuicjlmmajwealmivrmbxkaeskeujc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609898.034633-270-219108512561018/AnsiballZ_file.py'
Dec 13 07:11:38 compute-0 sudo[63829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:38 compute-0 python3.9[63831]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:38 compute-0 sudo[63829]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:38 compute-0 sudo[63981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brlikzgshvcthntctfhapoerejeesydo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609898.5001786-278-54989060502867/AnsiballZ_stat.py'
Dec 13 07:11:38 compute-0 sudo[63981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:38 compute-0 python3.9[63983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:38 compute-0 sudo[63981]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:39 compute-0 sudo[64104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jczldmkctunbfsdicukjqcwjmzxryueg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609898.5001786-278-54989060502867/AnsiballZ_copy.py'
Dec 13 07:11:39 compute-0 sudo[64104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:39 compute-0 python3.9[64106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609898.5001786-278-54989060502867/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:39 compute-0 sudo[64104]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:39 compute-0 sudo[64256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjfllmqwdasydpjgpcutxojrmehtaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609899.4761848-296-212265968673966/AnsiballZ_timezone.py'
Dec 13 07:11:39 compute-0 sudo[64256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:39 compute-0 python3.9[64258]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 07:11:40 compute-0 systemd[1]: Starting Time & Date Service...
Dec 13 07:11:40 compute-0 systemd[1]: Started Time & Date Service.
Dec 13 07:11:40 compute-0 sudo[64256]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:40 compute-0 sudo[64412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocbhblrnanjbjxqkcmqdfgyjbcrxkbta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609900.245813-305-256014862196643/AnsiballZ_file.py'
Dec 13 07:11:40 compute-0 sudo[64412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:40 compute-0 python3.9[64414]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:40 compute-0 sudo[64412]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:40 compute-0 sudo[64564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpgmumpxaailwrimirfunjkfmefhixci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609900.7138972-313-44262870729921/AnsiballZ_stat.py'
Dec 13 07:11:40 compute-0 sudo[64564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:41 compute-0 python3.9[64566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:41 compute-0 sudo[64564]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:41 compute-0 sudo[64687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdagsojyculdkrwdkrjbqjdxzkyjbnur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609900.7138972-313-44262870729921/AnsiballZ_copy.py'
Dec 13 07:11:41 compute-0 sudo[64687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:41 compute-0 python3.9[64689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609900.7138972-313-44262870729921/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:41 compute-0 sudo[64687]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:41 compute-0 sudo[64839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlvvjffiqkdidsygdpavdojogqyynzak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609901.601812-328-244938739988960/AnsiballZ_stat.py'
Dec 13 07:11:41 compute-0 sudo[64839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:41 compute-0 python3.9[64841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:41 compute-0 sudo[64839]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:42 compute-0 sudo[64962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klsgwbwghvovmxfoizioyddbnvydzhok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609901.601812-328-244938739988960/AnsiballZ_copy.py'
Dec 13 07:11:42 compute-0 sudo[64962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:42 compute-0 python3.9[64964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609901.601812-328-244938739988960/.source.yaml _original_basename=.n9196qbd follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:42 compute-0 sudo[64962]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:42 compute-0 sudo[65114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrducdgvvulsmxmslgfotkszkfflbqtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609902.4679198-343-57047322663997/AnsiballZ_stat.py'
Dec 13 07:11:42 compute-0 sudo[65114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:42 compute-0 python3.9[65116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:42 compute-0 sudo[65114]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:43 compute-0 sudo[65237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rniylbzliovioblctcblbezbnuaqkbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609902.4679198-343-57047322663997/AnsiballZ_copy.py'
Dec 13 07:11:43 compute-0 sudo[65237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:43 compute-0 python3.9[65239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609902.4679198-343-57047322663997/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:43 compute-0 sudo[65237]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:43 compute-0 sudo[65389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piwwffkqcmxwrmwusbqlhylojaldscxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609903.3233032-358-153210112862511/AnsiballZ_command.py'
Dec 13 07:11:43 compute-0 sudo[65389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:43 compute-0 python3.9[65391]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:11:43 compute-0 sudo[65389]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:44 compute-0 sudo[65542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjihgnvpnmbgljvdtmxomuwxggmptqnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609903.8163226-366-13086372163262/AnsiballZ_command.py'
Dec 13 07:11:44 compute-0 sudo[65542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:44 compute-0 python3.9[65544]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:11:44 compute-0 sudo[65542]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:44 compute-0 sudo[65695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sevlhiduxysmcwvpvmbrxzaxwnuveutn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765609904.2911923-374-46569848602980/AnsiballZ_edpm_nftables_from_files.py'
Dec 13 07:11:44 compute-0 sudo[65695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:44 compute-0 python3[65697]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 07:11:44 compute-0 sudo[65695]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:45 compute-0 sudo[65847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjkztrvcftfrkxadtdodmpqjexyuokbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609904.8921113-382-2629788672146/AnsiballZ_stat.py'
Dec 13 07:11:45 compute-0 sudo[65847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:45 compute-0 python3.9[65849]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:45 compute-0 sudo[65847]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:45 compute-0 sudo[65970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wadhxlguokwxsubwastldjpjuqfafvol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609904.8921113-382-2629788672146/AnsiballZ_copy.py'
Dec 13 07:11:45 compute-0 sudo[65970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:45 compute-0 python3.9[65972]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609904.8921113-382-2629788672146/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:45 compute-0 sudo[65970]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:45 compute-0 sudo[66122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muemxytgpupzwvfxzjnppaatdwjefzuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609905.7352118-397-240507926422713/AnsiballZ_stat.py'
Dec 13 07:11:45 compute-0 sudo[66122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:46 compute-0 python3.9[66124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:46 compute-0 sudo[66122]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:46 compute-0 sudo[66245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywqetcmyrhfcrgncvkfxylmspzarbbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609905.7352118-397-240507926422713/AnsiballZ_copy.py'
Dec 13 07:11:46 compute-0 sudo[66245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:46 compute-0 python3.9[66247]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609905.7352118-397-240507926422713/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:46 compute-0 sudo[66245]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:46 compute-0 sudo[66397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpdpfqslmsjowkrpdwxexlucfswuogcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609906.6095788-412-218695808680096/AnsiballZ_stat.py'
Dec 13 07:11:46 compute-0 sudo[66397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:47 compute-0 python3.9[66399]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:47 compute-0 sudo[66397]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:47 compute-0 sudo[66520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqwiawznxwsjiyidxehwjfbztuhknzeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609906.6095788-412-218695808680096/AnsiballZ_copy.py'
Dec 13 07:11:47 compute-0 sudo[66520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:47 compute-0 python3.9[66522]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609906.6095788-412-218695808680096/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:47 compute-0 sudo[66520]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:47 compute-0 sudo[66672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcpjifjdsqqopstonznnvjaczenjjbhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609907.6108472-427-19212125903793/AnsiballZ_stat.py'
Dec 13 07:11:47 compute-0 sudo[66672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:47 compute-0 python3.9[66674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:47 compute-0 sudo[66672]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:48 compute-0 sudo[66795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppqjtjkujdnnsrkkhuttguavmtigvsce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609907.6108472-427-19212125903793/AnsiballZ_copy.py'
Dec 13 07:11:48 compute-0 sudo[66795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:48 compute-0 python3.9[66797]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609907.6108472-427-19212125903793/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:48 compute-0 sudo[66795]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:48 compute-0 sudo[66947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjmbjgeielqeqqajppjhsiimeuhnjejc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609908.4783607-442-26662363361057/AnsiballZ_stat.py'
Dec 13 07:11:48 compute-0 sudo[66947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:48 compute-0 python3.9[66949]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:11:48 compute-0 sudo[66947]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:49 compute-0 sudo[67070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwykfbucblkmfwvjleokxkucsgxxzwyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609908.4783607-442-26662363361057/AnsiballZ_copy.py'
Dec 13 07:11:49 compute-0 sudo[67070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:49 compute-0 python3.9[67072]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765609908.4783607-442-26662363361057/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:49 compute-0 sudo[67070]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:49 compute-0 sudo[67222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnjcnadntutvabfgkxcstvcqdkehtoaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609909.4316213-457-83174524579249/AnsiballZ_file.py'
Dec 13 07:11:49 compute-0 sudo[67222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:49 compute-0 python3.9[67224]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:49 compute-0 sudo[67222]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:50 compute-0 sudo[67374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isfmmyqfeqvgmeslfqdncctyvzqjboxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609909.8940966-465-27717141034224/AnsiballZ_command.py'
Dec 13 07:11:50 compute-0 sudo[67374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:50 compute-0 python3.9[67376]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:11:50 compute-0 sudo[67374]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:50 compute-0 sudo[67533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuhdpzgrmezohusghzkghswnurgmvket ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609910.4000208-473-200411842208968/AnsiballZ_blockinfile.py'
Dec 13 07:11:50 compute-0 sudo[67533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:50 compute-0 python3.9[67535]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:50 compute-0 sudo[67533]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:51 compute-0 sudo[67686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnaumytrtneskpzgjwnjirrcssvlwlxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609911.1541114-482-59634582068174/AnsiballZ_file.py'
Dec 13 07:11:51 compute-0 sudo[67686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:51 compute-0 python3.9[67688]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:51 compute-0 sudo[67686]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:51 compute-0 sudo[67838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtichooyaxqsywrriqrielqmhpcawhvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609911.5752583-482-10181089645660/AnsiballZ_file.py'
Dec 13 07:11:51 compute-0 sudo[67838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:51 compute-0 python3.9[67840]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:11:51 compute-0 sudo[67838]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:52 compute-0 sudo[67990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzjkoaywmmhbtoqlwkoclvdhowomkliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609912.02996-497-171798741937506/AnsiballZ_mount.py'
Dec 13 07:11:52 compute-0 sudo[67990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:52 compute-0 python3.9[67992]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 07:11:52 compute-0 sudo[67990]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:52 compute-0 sudo[68143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvaahpsxpyyspvwvyfvjoycgrriqgffl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609912.656384-497-120114415947046/AnsiballZ_mount.py'
Dec 13 07:11:52 compute-0 sudo[68143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:53 compute-0 python3.9[68145]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 07:11:53 compute-0 sudo[68143]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:53 compute-0 sshd-session[58943]: Connection closed by 192.168.122.30 port 42100
Dec 13 07:11:53 compute-0 sshd-session[58940]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:11:53 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 07:11:53 compute-0 systemd[1]: session-13.scope: Consumed 24.831s CPU time.
Dec 13 07:11:53 compute-0 systemd-logind[745]: Session 13 logged out. Waiting for processes to exit.
Dec 13 07:11:53 compute-0 systemd-logind[745]: Removed session 13.
Dec 13 07:11:58 compute-0 sshd-session[68172]: Accepted publickey for zuul from 192.168.122.30 port 38672 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:11:58 compute-0 systemd-logind[745]: New session 14 of user zuul.
Dec 13 07:11:58 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 13 07:11:58 compute-0 sshd-session[68172]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:11:59 compute-0 sudo[68325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsgcjvzacskvkffngweqfrslabbhvygf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609918.7120135-16-114750184340583/AnsiballZ_tempfile.py'
Dec 13 07:11:59 compute-0 sudo[68325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:59 compute-0 python3.9[68327]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 13 07:11:59 compute-0 sudo[68325]: pam_unix(sudo:session): session closed for user root
Dec 13 07:11:59 compute-0 sudo[68477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuhenrqexnlzxyfzscrcmjbcmvhrizxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609919.3041718-28-228749371247053/AnsiballZ_stat.py'
Dec 13 07:11:59 compute-0 sudo[68477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:11:59 compute-0 python3.9[68479]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:11:59 compute-0 sudo[68477]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:00 compute-0 sudo[68629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atkicwhxommhosuitygthxskuucrhnde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609919.895812-38-60884553660078/AnsiballZ_setup.py'
Dec 13 07:12:00 compute-0 sudo[68629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:00 compute-0 python3.9[68631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:12:00 compute-0 sudo[68629]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:01 compute-0 sudo[68781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzkdonginblkbvpctyvkhusuwkaxblqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609920.7369788-47-98131673071700/AnsiballZ_blockinfile.py'
Dec 13 07:12:01 compute-0 sudo[68781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:01 compute-0 python3.9[68783]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCekpfjOZMQHu4kGkMmbnPcCtz1ykBu18rwwghFZ6JdZNeLGT0geVZzeGTxx67o32Xucl5rndeaEtZvZfxTXM1W/3Z9ig0x1tTtqK2lTLjxcw4+AxChtq8Mt1LZKUi2MHVUdDkB8UwKvPPC6k5NFQRBu1jsX63zDiUCudXQlFm49OLA8BZh7VuZYlpOMnuiPC9cWsSAehEH4hmIdqlyl7xhfBn/4IId10yPH4Bev4qk4z212G730uw0ldn9RfPP2Batr31zKwOCUveVL5V48yK6VIj2O4uztbh6yagWlbqPwmUoYdvokyMVmONCStsc8BDSSaTmH7gv6cm1tfpfpKJlBo25kpuVocNQaaZB8/x71weojzujWfYBPfwbGARRkq9lgjdmyLJot9XdtcDkAKNeE6nzDo29nj1SpYzDYu2OrwI8RN9TLEQyXyUi80L4ELrI2WrVf5NwIvfG0ZKHurHxEDYcJKris+z3lCdPHRbw/D0HAhFZ6YnnViCeqLe+XL0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDlhQSLisbnaeA/5eqQ07vXPLvOWH+wLodInwcPHjCbq
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL+1SrJ/t+tkNcFtDd1R0f0/5owYzeRM7hR2TrpSEQtZk5y2BWR+htC7NOo7cYghMztLnyJaOIsNSp9NjO5UEBE=
                                             create=True mode=0644 path=/tmp/ansible.up0z_r17 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:01 compute-0 sudo[68781]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:01 compute-0 sudo[68933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weidcnvtivzgpcszhrztjipkurblknwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609921.3178885-55-120340809073644/AnsiballZ_command.py'
Dec 13 07:12:01 compute-0 sudo[68933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:01 compute-0 python3.9[68935]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.up0z_r17' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:01 compute-0 sudo[68933]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:02 compute-0 sudo[69087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikdukpbcmqijoyyewvndlklobddtzyao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609922.0192482-63-256816011529656/AnsiballZ_file.py'
Dec 13 07:12:02 compute-0 sudo[69087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:02 compute-0 python3.9[69089]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.up0z_r17 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:02 compute-0 sudo[69087]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:02 compute-0 sshd-session[68175]: Connection closed by 192.168.122.30 port 38672
Dec 13 07:12:02 compute-0 sshd-session[68172]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:12:02 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 07:12:02 compute-0 systemd[1]: session-14.scope: Consumed 2.380s CPU time.
Dec 13 07:12:02 compute-0 systemd-logind[745]: Session 14 logged out. Waiting for processes to exit.
Dec 13 07:12:02 compute-0 systemd-logind[745]: Removed session 14.
Dec 13 07:12:07 compute-0 sshd-session[69114]: Accepted publickey for zuul from 192.168.122.30 port 53120 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:12:07 compute-0 systemd-logind[745]: New session 15 of user zuul.
Dec 13 07:12:07 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 13 07:12:07 compute-0 sshd-session[69114]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:12:08 compute-0 python3.9[69267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:12:09 compute-0 sudo[69421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnhbpbmvovgqthmghluuopgccgshomij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609928.73138-32-204646993018700/AnsiballZ_systemd.py'
Dec 13 07:12:09 compute-0 sudo[69421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:09 compute-0 python3.9[69423]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 07:12:09 compute-0 sudo[69421]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:09 compute-0 sudo[69575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkltphoccqfltmhqnjxjnogbggscaqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609929.556556-40-125687939426868/AnsiballZ_systemd.py'
Dec 13 07:12:09 compute-0 sudo[69575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:09 compute-0 python3.9[69577]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:12:10 compute-0 sudo[69575]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:10 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 07:12:10 compute-0 sudo[69730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glalbmlwwrzzczmseqcsfecqkfordgwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609930.1557493-49-224207882451170/AnsiballZ_command.py'
Dec 13 07:12:10 compute-0 sudo[69730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:10 compute-0 python3.9[69732]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:10 compute-0 sudo[69730]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:11 compute-0 sudo[69883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcdgofobzbtyqsrsexaoctcsyotxxfib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609930.7603483-57-203684883387755/AnsiballZ_stat.py'
Dec 13 07:12:11 compute-0 sudo[69883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:11 compute-0 python3.9[69885]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:12:11 compute-0 sudo[69883]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:11 compute-0 sudo[70037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udncrqvmdlfxemfdcgwfltlsocrgesla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609931.3325348-65-155504751158030/AnsiballZ_command.py'
Dec 13 07:12:11 compute-0 sudo[70037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:11 compute-0 python3.9[70039]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:11 compute-0 sudo[70037]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:12 compute-0 sudo[70192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxihqvezgjwbjlkpgidhmupgbpoghhca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609931.812812-73-255927221787863/AnsiballZ_file.py'
Dec 13 07:12:12 compute-0 sudo[70192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:12 compute-0 python3.9[70194]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:12 compute-0 sudo[70192]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:12 compute-0 sshd-session[69117]: Connection closed by 192.168.122.30 port 53120
Dec 13 07:12:12 compute-0 sshd-session[69114]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:12:12 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 07:12:12 compute-0 systemd[1]: session-15.scope: Consumed 3.206s CPU time.
Dec 13 07:12:12 compute-0 systemd-logind[745]: Session 15 logged out. Waiting for processes to exit.
Dec 13 07:12:12 compute-0 systemd-logind[745]: Removed session 15.
Dec 13 07:12:17 compute-0 sshd-session[70219]: Accepted publickey for zuul from 192.168.122.30 port 52348 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:12:17 compute-0 systemd-logind[745]: New session 16 of user zuul.
Dec 13 07:12:17 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 13 07:12:17 compute-0 sshd-session[70219]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:12:18 compute-0 python3.9[70372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:12:19 compute-0 sudo[70526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daqrmhoynhpqtwhsnymeynpmpseieyvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609939.0785086-34-273489119297121/AnsiballZ_setup.py'
Dec 13 07:12:19 compute-0 sudo[70526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:19 compute-0 python3.9[70528]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:12:19 compute-0 sudo[70526]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:20 compute-0 sudo[70610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzjkavkercjoqfcdcmlukrrdkwyqrnai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765609939.0785086-34-273489119297121/AnsiballZ_dnf.py'
Dec 13 07:12:20 compute-0 sudo[70610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:20 compute-0 python3.9[70612]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 07:12:21 compute-0 sudo[70610]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:21 compute-0 python3.9[70763]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:22 compute-0 python3.9[70914]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 07:12:23 compute-0 python3.9[71064]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:12:23 compute-0 python3.9[71214]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:12:24 compute-0 sshd-session[70222]: Connection closed by 192.168.122.30 port 52348
Dec 13 07:12:24 compute-0 sshd-session[70219]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:12:24 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 07:12:24 compute-0 systemd[1]: session-16.scope: Consumed 4.394s CPU time.
Dec 13 07:12:24 compute-0 systemd-logind[745]: Session 16 logged out. Waiting for processes to exit.
Dec 13 07:12:24 compute-0 systemd-logind[745]: Removed session 16.
Dec 13 07:12:30 compute-0 sshd-session[71239]: Accepted publickey for zuul from 192.168.25.167 port 56916 ssh2: RSA SHA256:6D1WjYOFjoFBsumnInA3EGvtTfCaVlI9gahR8Wfk2Jc
Dec 13 07:12:30 compute-0 systemd-logind[745]: New session 17 of user zuul.
Dec 13 07:12:30 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 13 07:12:30 compute-0 sshd-session[71239]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:12:30 compute-0 sudo[71315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pndmwheuanrqltcvptqyomwrildrgxbd ; /usr/bin/python3'
Dec 13 07:12:30 compute-0 sudo[71315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:31 compute-0 useradd[71319]: new group: name=ceph-admin, GID=42478
Dec 13 07:12:31 compute-0 useradd[71319]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 13 07:12:31 compute-0 sudo[71315]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:31 compute-0 sudo[71401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhilbbytgikaxgtgywthmdjaecyyocev ; /usr/bin/python3'
Dec 13 07:12:31 compute-0 sudo[71401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:31 compute-0 sudo[71401]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:31 compute-0 sudo[71474]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nodqlquqjpootxnrwtqcubagkfzbvdpw ; /usr/bin/python3'
Dec 13 07:12:31 compute-0 sudo[71474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:31 compute-0 sudo[71474]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:31 compute-0 sudo[71524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euyrnuhalkajibtyyomgrsojzzzjkbyb ; /usr/bin/python3'
Dec 13 07:12:31 compute-0 sudo[71524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:32 compute-0 sudo[71524]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:32 compute-0 sudo[71550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbwrksartrkqbqknkieccywifpgndxp ; /usr/bin/python3'
Dec 13 07:12:32 compute-0 sudo[71550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:32 compute-0 sudo[71550]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:32 compute-0 sudo[71576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwkviuxboeueevztizppsfvuudtfchwl ; /usr/bin/python3'
Dec 13 07:12:32 compute-0 sudo[71576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:32 compute-0 sudo[71576]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:32 compute-0 sudo[71602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgsustoxzwfnemlajdfqehtxtatrboxw ; /usr/bin/python3'
Dec 13 07:12:32 compute-0 sudo[71602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:32 compute-0 sudo[71602]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:33 compute-0 sudo[71680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayeyiypeptckgowghojwxzsvxnnuqxns ; /usr/bin/python3'
Dec 13 07:12:33 compute-0 sudo[71680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:33 compute-0 sudo[71680]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:33 compute-0 sudo[71753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pumedaurbmpohmyotewogozkpvpgqmcl ; /usr/bin/python3'
Dec 13 07:12:33 compute-0 sudo[71753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:33 compute-0 sudo[71753]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:33 compute-0 sudo[71855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdhbfymgncxbiuwsyybhxwsfkrbbwzox ; /usr/bin/python3'
Dec 13 07:12:33 compute-0 sudo[71855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:33 compute-0 sudo[71855]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:33 compute-0 sudo[71928]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vljxbtkhxlonyrywztmmofmapwtkpifc ; /usr/bin/python3'
Dec 13 07:12:33 compute-0 sudo[71928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:34 compute-0 sudo[71928]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:34 compute-0 sudo[71978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dntbcwxiivhtogywikwzzihhafbzdlll ; /usr/bin/python3'
Dec 13 07:12:34 compute-0 sudo[71978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:34 compute-0 python3[71980]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:12:35 compute-0 sudo[71978]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:35 compute-0 sudo[72069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhsrbtmgkprilxkraplrecbtuosvaczl ; /usr/bin/python3'
Dec 13 07:12:35 compute-0 sudo[72069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:35 compute-0 python3[72071]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 07:12:36 compute-0 sudo[72069]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:36 compute-0 sudo[72096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrnulbjzqebhgjkbdrmtjrgpsjvuvjgt ; /usr/bin/python3'
Dec 13 07:12:36 compute-0 sudo[72096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:37 compute-0 python3[72098]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 07:12:37 compute-0 sudo[72096]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:37 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:12:37 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:12:37 compute-0 sudo[72123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrtegnbttgujjxfxknytnulyczvipeqo ; /usr/bin/python3'
Dec 13 07:12:37 compute-0 sudo[72123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:37 compute-0 python3[72125]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:37 compute-0 kernel: loop: module loaded
Dec 13 07:12:37 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 13 07:12:37 compute-0 sudo[72123]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:37 compute-0 sudo[72158]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkwuckydyckcdsxprusndlybeubbebcn ; /usr/bin/python3'
Dec 13 07:12:37 compute-0 sudo[72158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:37 compute-0 python3[72160]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:37 compute-0 lvm[72163]: PV /dev/loop3 not used.
Dec 13 07:12:37 compute-0 lvm[72172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:12:37 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 13 07:12:37 compute-0 sudo[72158]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:37 compute-0 lvm[72174]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 13 07:12:37 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 13 07:12:37 compute-0 sudo[72250]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqsrelauonkzglnwcjuxeaxtpxpnveyv ; /usr/bin/python3'
Dec 13 07:12:37 compute-0 sudo[72250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:38 compute-0 python3[72252]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:12:38 compute-0 sudo[72250]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:38 compute-0 sudo[72323]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqqjsfzssciwdnzlsybpqauqlnnfxxsp ; /usr/bin/python3'
Dec 13 07:12:38 compute-0 sudo[72323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:38 compute-0 python3[72325]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609957.8479996-36604-11195434469407/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:38 compute-0 sudo[72323]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:38 compute-0 sudo[72373]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubdduijxuycjrupallopsaqgzxepsfrc ; /usr/bin/python3'
Dec 13 07:12:38 compute-0 sudo[72373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:38 compute-0 python3[72375]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:12:38 compute-0 systemd[1]: Reloading.
Dec 13 07:12:39 compute-0 systemd-sysv-generator[72404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:12:39 compute-0 systemd-rc-local-generator[72397]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:12:39 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 13 07:12:39 compute-0 bash[72414]: /dev/loop3: [64513]:4327953 (/var/lib/ceph-osd-0.img)
Dec 13 07:12:39 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 13 07:12:39 compute-0 lvm[72415]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:12:39 compute-0 lvm[72415]: VG ceph_vg0 finished
Dec 13 07:12:39 compute-0 sudo[72373]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:39 compute-0 sudo[72439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjduqvxshtzzunedfjbznuuqlfghvfga ; /usr/bin/python3'
Dec 13 07:12:39 compute-0 sudo[72439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:39 compute-0 python3[72441]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 07:12:40 compute-0 sudo[72439]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:40 compute-0 sudo[72466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthgatzcsobdbpbqurkqdeqhacnovygh ; /usr/bin/python3'
Dec 13 07:12:40 compute-0 sudo[72466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:40 compute-0 python3[72468]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 07:12:40 compute-0 sudo[72466]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:40 compute-0 sudo[72492]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sahrhnvosqterqdegrqzrfmlarryjfce ; /usr/bin/python3'
Dec 13 07:12:40 compute-0 sudo[72492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:40 compute-0 python3[72494]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:40 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec 13 07:12:40 compute-0 sudo[72492]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:41 compute-0 sudo[72524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypvlgdlmipiiuqgsdnwdugmoxlcgwzla ; /usr/bin/python3'
Dec 13 07:12:41 compute-0 sudo[72524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:41 compute-0 python3[72526]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:41 compute-0 lvm[72529]: PV /dev/loop4 not used.
Dec 13 07:12:41 compute-0 lvm[72538]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:12:41 compute-0 sudo[72524]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:41 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 13 07:12:41 compute-0 lvm[72540]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 13 07:12:41 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 13 07:12:41 compute-0 sudo[72616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouxwpwcmhelyutyflngblermyetqgopt ; /usr/bin/python3'
Dec 13 07:12:41 compute-0 sudo[72616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:41 compute-0 python3[72618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:12:41 compute-0 sudo[72616]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:41 compute-0 sudo[72689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpmluhyldzrwrxxesfxdxresfvxnnvgh ; /usr/bin/python3'
Dec 13 07:12:41 compute-0 sudo[72689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:41 compute-0 python3[72691]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609961.447473-36631-219866393732014/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:41 compute-0 sudo[72689]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:42 compute-0 sudo[72739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyytvxtpkeptectoyrigcmjiylhpimht ; /usr/bin/python3'
Dec 13 07:12:42 compute-0 sudo[72739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:42 compute-0 python3[72741]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:12:42 compute-0 systemd[1]: Reloading.
Dec 13 07:12:42 compute-0 systemd-rc-local-generator[72764]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:12:42 compute-0 systemd-sysv-generator[72767]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:12:42 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 13 07:12:42 compute-0 bash[72781]: /dev/loop4: [64513]:4327955 (/var/lib/ceph-osd-1.img)
Dec 13 07:12:42 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 13 07:12:42 compute-0 lvm[72782]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:12:42 compute-0 lvm[72782]: VG ceph_vg1 finished
Dec 13 07:12:42 compute-0 sudo[72739]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:42 compute-0 sudo[72806]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myajwobpyngxzqnoeefnxmqftzrysvia ; /usr/bin/python3'
Dec 13 07:12:42 compute-0 sudo[72806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:42 compute-0 python3[72808]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 07:12:43 compute-0 sudo[72806]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:43 compute-0 sudo[72833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kttvzbvupcitppseihujukabslgodalg ; /usr/bin/python3'
Dec 13 07:12:43 compute-0 sudo[72833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:43 compute-0 python3[72835]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 07:12:43 compute-0 sudo[72833]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:44 compute-0 sudo[72859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtuiatdrtygyvqjqqwawknpqogurvhgg ; /usr/bin/python3'
Dec 13 07:12:44 compute-0 sudo[72859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:44 compute-0 python3[72861]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:44 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec 13 07:12:44 compute-0 sudo[72859]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:44 compute-0 sudo[72891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqavukceruqkxkjgvrenlnsiigjjmgfm ; /usr/bin/python3'
Dec 13 07:12:44 compute-0 sudo[72891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:44 compute-0 python3[72893]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:44 compute-0 lvm[72896]: PV /dev/loop5 not used.
Dec 13 07:12:44 compute-0 lvm[72906]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:12:44 compute-0 sudo[72891]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:44 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 13 07:12:44 compute-0 lvm[72908]:   1 logical volume(s) in volume group "ceph_vg2" now active
Dec 13 07:12:44 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 13 07:12:44 compute-0 sudo[72984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhikckfnjuihkuiukvlfqjjzbxpzotqb ; /usr/bin/python3'
Dec 13 07:12:44 compute-0 sudo[72984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:44 compute-0 python3[72986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:12:44 compute-0 sudo[72984]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:45 compute-0 sudo[73057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmqsaccjffbfjfttzqqvxkwijmbqlemi ; /usr/bin/python3'
Dec 13 07:12:45 compute-0 sudo[73057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:45 compute-0 python3[73059]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609964.698612-36658-50672676213534/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:45 compute-0 sudo[73057]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:45 compute-0 sudo[73107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inohbiclxzewkqgokbzbbojbdhbkzlgd ; /usr/bin/python3'
Dec 13 07:12:45 compute-0 sudo[73107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:45 compute-0 python3[73109]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:12:45 compute-0 systemd[1]: Reloading.
Dec 13 07:12:45 compute-0 systemd-rc-local-generator[73131]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:12:45 compute-0 systemd-sysv-generator[73135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:12:45 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 13 07:12:45 compute-0 bash[73148]: /dev/loop5: [64513]:4327967 (/var/lib/ceph-osd-2.img)
Dec 13 07:12:45 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 13 07:12:45 compute-0 lvm[73149]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:12:45 compute-0 lvm[73149]: VG ceph_vg2 finished
Dec 13 07:12:45 compute-0 sudo[73107]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:47 compute-0 python3[73173]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:12:48 compute-0 sudo[73264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkuijcubhdgcqulvemtnopulbcwgfcds ; /usr/bin/python3'
Dec 13 07:12:48 compute-0 sudo[73264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:49 compute-0 python3[73266]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 07:12:50 compute-0 sudo[73264]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:50 compute-0 sudo[73321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmfiorkejkpwhaxdweucxjbmhzlpdsgz ; /usr/bin/python3'
Dec 13 07:12:50 compute-0 sudo[73321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:50 compute-0 python3[73323]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 07:12:53 compute-0 groupadd[73335]: group added to /etc/group: name=cephadm, GID=992
Dec 13 07:12:53 compute-0 groupadd[73335]: group added to /etc/gshadow: name=cephadm
Dec 13 07:12:53 compute-0 groupadd[73335]: new group: name=cephadm, GID=992
Dec 13 07:12:53 compute-0 useradd[73342]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 13 07:12:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 07:12:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 07:12:54 compute-0 sudo[73321]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 07:12:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 07:12:54 compute-0 systemd[1]: run-r5911b45104d548e08c8fbb96fa64b6ce.service: Deactivated successfully.
Dec 13 07:12:54 compute-0 sudo[73443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebkrnkaefcxjzwowkksdutbympbpumwr ; /usr/bin/python3'
Dec 13 07:12:54 compute-0 sudo[73443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:54 compute-0 python3[73445]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 07:12:54 compute-0 sudo[73443]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:54 compute-0 sudo[73471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jecmxcykzfwyrlixcxrvtkqctkbftmuz ; /usr/bin/python3'
Dec 13 07:12:54 compute-0 sudo[73471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:54 compute-0 python3[73473]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 07:12:54 compute-0 sudo[73471]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:55 compute-0 sudo[73507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyftaqwsyhlkqgxhlkoywknzhuechgnz ; /usr/bin/python3'
Dec 13 07:12:55 compute-0 sudo[73507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:55 compute-0 python3[73509]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:55 compute-0 sudo[73507]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:55 compute-0 sudo[73533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-innfiwoylesdpsknkkirresravedqjov ; /usr/bin/python3'
Dec 13 07:12:55 compute-0 sudo[73533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:55 compute-0 python3[73535]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:55 compute-0 sudo[73533]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:55 compute-0 sudo[73611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfepzqqsgzqhvuwqkjqhimcwiddvmhjz ; /usr/bin/python3'
Dec 13 07:12:55 compute-0 sudo[73611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:56 compute-0 python3[73613]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:12:56 compute-0 sudo[73611]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:56 compute-0 sudo[73684]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eboirfomuqmcjbtvehrjkvkktzejejnz ; /usr/bin/python3'
Dec 13 07:12:56 compute-0 sudo[73684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:56 compute-0 python3[73686]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609975.884619-36806-246648357210251/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:56 compute-0 sudo[73684]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:56 compute-0 sudo[73786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmhosmvrkzkdkwkuojmwrzdisshvhfu ; /usr/bin/python3'
Dec 13 07:12:56 compute-0 sudo[73786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:56 compute-0 python3[73788]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:12:56 compute-0 sudo[73786]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:57 compute-0 sudo[73859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawoykjvtrytcanxsicasnmlgymxfaku ; /usr/bin/python3'
Dec 13 07:12:57 compute-0 sudo[73859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:57 compute-0 python3[73861]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765609976.7038133-36824-145414303229595/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:12:57 compute-0 sudo[73859]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:57 compute-0 sudo[73909]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfydszxttgxmopnqzxgwztmfcggsufze ; /usr/bin/python3'
Dec 13 07:12:57 compute-0 sudo[73909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:57 compute-0 python3[73911]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 07:12:57 compute-0 sudo[73909]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:57 compute-0 sudo[73937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfozrwaawfblceeyrxvkmntnfjbnszmi ; /usr/bin/python3'
Dec 13 07:12:57 compute-0 sudo[73937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:57 compute-0 python3[73939]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 07:12:57 compute-0 sudo[73937]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:57 compute-0 sudo[73965]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjxawqodbtemeohhsacbdbfphfkjlca ; /usr/bin/python3'
Dec 13 07:12:57 compute-0 sudo[73965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:57 compute-0 python3[73967]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 07:12:57 compute-0 sudo[73965]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:58 compute-0 sudo[73993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjybtsbwlufexheaxppqcdpyyfitjvxt ; /usr/bin/python3'
Dec 13 07:12:58 compute-0 sudo[73993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:12:58 compute-0 python3[73995]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:12:58 compute-0 sshd-session[73999]: Accepted publickey for ceph-admin from 192.168.122.100 port 40402 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:12:58 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 13 07:12:58 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 13 07:12:58 compute-0 systemd-logind[745]: New session 18 of user ceph-admin.
Dec 13 07:12:58 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 13 07:12:58 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 13 07:12:58 compute-0 systemd[74003]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:12:58 compute-0 systemd[74003]: Queued start job for default target Main User Target.
Dec 13 07:12:58 compute-0 systemd[74003]: Created slice User Application Slice.
Dec 13 07:12:58 compute-0 systemd[74003]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 07:12:58 compute-0 systemd[74003]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 07:12:58 compute-0 systemd[74003]: Reached target Paths.
Dec 13 07:12:58 compute-0 systemd[74003]: Reached target Timers.
Dec 13 07:12:58 compute-0 systemd[74003]: Starting D-Bus User Message Bus Socket...
Dec 13 07:12:58 compute-0 systemd[74003]: Starting Create User's Volatile Files and Directories...
Dec 13 07:12:58 compute-0 systemd[74003]: Finished Create User's Volatile Files and Directories.
Dec 13 07:12:58 compute-0 systemd[74003]: Listening on D-Bus User Message Bus Socket.
Dec 13 07:12:58 compute-0 systemd[74003]: Reached target Sockets.
Dec 13 07:12:58 compute-0 systemd[74003]: Reached target Basic System.
Dec 13 07:12:58 compute-0 systemd[74003]: Reached target Main User Target.
Dec 13 07:12:58 compute-0 systemd[74003]: Startup finished in 94ms.
Dec 13 07:12:58 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 13 07:12:58 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Dec 13 07:12:58 compute-0 sshd-session[73999]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:12:58 compute-0 sudo[74019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 13 07:12:58 compute-0 sudo[74019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:12:58 compute-0 sudo[74019]: pam_unix(sudo:session): session closed for user root
Dec 13 07:12:58 compute-0 sshd-session[74018]: Received disconnect from 192.168.122.100 port 40402:11: disconnected by user
Dec 13 07:12:58 compute-0 sshd-session[74018]: Disconnected from user ceph-admin 192.168.122.100 port 40402
Dec 13 07:12:58 compute-0 sshd-session[73999]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 13 07:12:58 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 07:12:58 compute-0 systemd-logind[745]: Session 18 logged out. Waiting for processes to exit.
Dec 13 07:12:58 compute-0 systemd-logind[745]: Removed session 18.
Dec 13 07:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 07:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 07:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1829252655-merged.mount: Deactivated successfully.
Dec 13 07:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1829252655-lower\x2dmapped.mount: Deactivated successfully.
Dec 13 07:13:08 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 13 07:13:08 compute-0 systemd[74003]: Activating special unit Exit the Session...
Dec 13 07:13:08 compute-0 systemd[74003]: Stopped target Main User Target.
Dec 13 07:13:08 compute-0 systemd[74003]: Stopped target Basic System.
Dec 13 07:13:08 compute-0 systemd[74003]: Stopped target Paths.
Dec 13 07:13:08 compute-0 systemd[74003]: Stopped target Sockets.
Dec 13 07:13:08 compute-0 systemd[74003]: Stopped target Timers.
Dec 13 07:13:08 compute-0 systemd[74003]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 13 07:13:08 compute-0 systemd[74003]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 13 07:13:08 compute-0 systemd[74003]: Closed D-Bus User Message Bus Socket.
Dec 13 07:13:08 compute-0 systemd[74003]: Stopped Create User's Volatile Files and Directories.
Dec 13 07:13:08 compute-0 systemd[74003]: Removed slice User Application Slice.
Dec 13 07:13:08 compute-0 systemd[74003]: Reached target Shutdown.
Dec 13 07:13:08 compute-0 systemd[74003]: Finished Exit the Session.
Dec 13 07:13:08 compute-0 systemd[74003]: Reached target Exit the Session.
Dec 13 07:13:08 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 13 07:13:08 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 13 07:13:08 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 13 07:13:08 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 13 07:13:08 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 13 07:13:08 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 13 07:13:08 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 13 07:13:16 compute-0 chronyd[58459]: Selected source 141.11.228.173 (pool.ntp.org)
Dec 13 07:13:18 compute-0 podman[74092]: 2025-12-13 07:13:18.69487066 +0000 UTC m=+19.873987126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 07:13:18 compute-0 podman[74144]: 2025-12-13 07:13:18.741129831 +0000 UTC m=+0.027033535 container create 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1329027727-merged.mount: Deactivated successfully.
Dec 13 07:13:18 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 13 07:13:18 compute-0 systemd[1]: Started libpod-conmon-843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f.scope.
Dec 13 07:13:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:18 compute-0 podman[74144]: 2025-12-13 07:13:18.805304934 +0000 UTC m=+0.091208659 container init 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:18 compute-0 podman[74144]: 2025-12-13 07:13:18.810917366 +0000 UTC m=+0.096821071 container start 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:13:18 compute-0 podman[74144]: 2025-12-13 07:13:18.814462601 +0000 UTC m=+0.100366306 container attach 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:13:18 compute-0 podman[74144]: 2025-12-13 07:13:18.730417912 +0000 UTC m=+0.016321637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:18 compute-0 ecstatic_sammet[74157]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 13 07:13:18 compute-0 systemd[1]: libpod-843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f.scope: Deactivated successfully.
Dec 13 07:13:18 compute-0 podman[74144]: 2025-12-13 07:13:18.892663855 +0000 UTC m=+0.178567560 container died 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 07:13:18 compute-0 podman[74144]: 2025-12-13 07:13:18.910463899 +0000 UTC m=+0.196367604 container remove 843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f (image=quay.io/ceph/ceph:v20, name=ecstatic_sammet, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:18 compute-0 systemd[1]: libpod-conmon-843090b43018c328505c29f9accca29716b5f498cea87f750d2998aea9dcef6f.scope: Deactivated successfully.
Dec 13 07:13:18 compute-0 podman[74170]: 2025-12-13 07:13:18.955504699 +0000 UTC m=+0.028646289 container create f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:13:18 compute-0 systemd[1]: Started libpod-conmon-f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954.scope.
Dec 13 07:13:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:19 compute-0 podman[74170]: 2025-12-13 07:13:19.003241508 +0000 UTC m=+0.076383097 container init f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:13:19 compute-0 podman[74170]: 2025-12-13 07:13:19.007599331 +0000 UTC m=+0.080740920 container start f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:13:19 compute-0 podman[74170]: 2025-12-13 07:13:19.008610061 +0000 UTC m=+0.081751641 container attach f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 07:13:19 compute-0 eager_antonelli[74184]: 167 167
Dec 13 07:13:19 compute-0 systemd[1]: libpod-f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74170]: 2025-12-13 07:13:19.010745636 +0000 UTC m=+0.083887225 container died f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 07:13:19 compute-0 podman[74170]: 2025-12-13 07:13:19.027735559 +0000 UTC m=+0.100877148 container remove f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954 (image=quay.io/ceph/ceph:v20, name=eager_antonelli, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 07:13:19 compute-0 podman[74170]: 2025-12-13 07:13:18.944317838 +0000 UTC m=+0.017459426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:19 compute-0 systemd[1]: libpod-conmon-f35db3dabbd45f5a4aabf7e14a41193930df696cb0bf8e7d0a598778472c4954.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74198]: 2025-12-13 07:13:19.073056355 +0000 UTC m=+0.027654473 container create 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:19 compute-0 systemd[1]: Started libpod-conmon-292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664.scope.
Dec 13 07:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:19 compute-0 podman[74198]: 2025-12-13 07:13:19.131662308 +0000 UTC m=+0.086260445 container init 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:19 compute-0 podman[74198]: 2025-12-13 07:13:19.135733101 +0000 UTC m=+0.090331219 container start 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:19 compute-0 podman[74198]: 2025-12-13 07:13:19.138043345 +0000 UTC m=+0.092641463 container attach 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 07:13:19 compute-0 gracious_austin[74213]: AQAPEj1pehcICRAARZufc0en6jBlZpV54gKkPA==
Dec 13 07:13:19 compute-0 systemd[1]: libpod-292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74198]: 2025-12-13 07:13:19.154538977 +0000 UTC m=+0.109137096 container died 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:19 compute-0 podman[74198]: 2025-12-13 07:13:19.062011609 +0000 UTC m=+0.016609747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:19 compute-0 podman[74198]: 2025-12-13 07:13:19.16935334 +0000 UTC m=+0.123951458 container remove 292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664 (image=quay.io/ceph/ceph:v20, name=gracious_austin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:19 compute-0 systemd[1]: libpod-conmon-292dfa3faea3a69a7d71aa833970bb632ab3b295fffdeadb2df1aa0a6af12664.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74230]: 2025-12-13 07:13:19.212465934 +0000 UTC m=+0.027649945 container create 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:19 compute-0 systemd[1]: Started libpod-conmon-9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196.scope.
Dec 13 07:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:19 compute-0 podman[74230]: 2025-12-13 07:13:19.248752205 +0000 UTC m=+0.063936226 container init 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Dec 13 07:13:19 compute-0 podman[74230]: 2025-12-13 07:13:19.252836524 +0000 UTC m=+0.068020535 container start 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:19 compute-0 podman[74230]: 2025-12-13 07:13:19.25381325 +0000 UTC m=+0.068997271 container attach 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:19 compute-0 sleepy_stonebraker[74243]: AQAPEj1phbX6DxAAPe7HcwW+L6g3PIj7SIL+Ag==
Dec 13 07:13:19 compute-0 systemd[1]: libpod-9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74230]: 2025-12-13 07:13:19.270745183 +0000 UTC m=+0.085929193 container died 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 07:13:19 compute-0 podman[74230]: 2025-12-13 07:13:19.285119047 +0000 UTC m=+0.100303058 container remove 9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196 (image=quay.io/ceph/ceph:v20, name=sleepy_stonebraker, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:19 compute-0 podman[74230]: 2025-12-13 07:13:19.201953771 +0000 UTC m=+0.017137781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:19 compute-0 systemd[1]: libpod-conmon-9f04ed7439dd2b39a95b2f2c5be2cbb1773b35da21cc4a0139ea201d157b5196.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74259]: 2025-12-13 07:13:19.327155879 +0000 UTC m=+0.025906026 container create 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:19 compute-0 systemd[1]: Started libpod-conmon-8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334.scope.
Dec 13 07:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:19 compute-0 podman[74259]: 2025-12-13 07:13:19.368161222 +0000 UTC m=+0.066911389 container init 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:13:19 compute-0 podman[74259]: 2025-12-13 07:13:19.37230402 +0000 UTC m=+0.071054168 container start 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:19 compute-0 podman[74259]: 2025-12-13 07:13:19.377404178 +0000 UTC m=+0.076154326 container attach 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:19 compute-0 fervent_euclid[74273]: AQAPEj1p07ghFxAAwTzIQ0nXvXHqoZYms/j+zg==
Dec 13 07:13:19 compute-0 systemd[1]: libpod-8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 conmon[74273]: conmon 8e2665a2599968bfed91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334.scope/container/memory.events
Dec 13 07:13:19 compute-0 podman[74259]: 2025-12-13 07:13:19.316705421 +0000 UTC m=+0.015455568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:19 compute-0 podman[74280]: 2025-12-13 07:13:19.420894163 +0000 UTC m=+0.016366441 container died 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:19 compute-0 podman[74280]: 2025-12-13 07:13:19.695948734 +0000 UTC m=+0.291421003 container remove 8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334 (image=quay.io/ceph/ceph:v20, name=fervent_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6310d58fa2854cc669f16badea06eb1e982e6e6ca603511916af7c706c4087fe-merged.mount: Deactivated successfully.
Dec 13 07:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 07:13:19 compute-0 systemd[1]: libpod-conmon-8e2665a2599968bfed91303ed96c93c8ada409a2b26f8abcbab4ab57f3e71334.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74292]: 2025-12-13 07:13:19.742755024 +0000 UTC m=+0.027614298 container create 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:13:19 compute-0 systemd[1]: Started libpod-conmon-5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc.scope.
Dec 13 07:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1596962d5b78904796d452d34521e5a0050d6cc353e52c16371e4a815a7ca87/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:19 compute-0 podman[74292]: 2025-12-13 07:13:19.784605826 +0000 UTC m=+0.069465120 container init 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:13:19 compute-0 podman[74292]: 2025-12-13 07:13:19.788466884 +0000 UTC m=+0.073326158 container start 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:13:19 compute-0 podman[74292]: 2025-12-13 07:13:19.789554289 +0000 UTC m=+0.074413563 container attach 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:13:19 compute-0 friendly_ellis[74305]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 13 07:13:19 compute-0 friendly_ellis[74305]: setting min_mon_release = tentacle
Dec 13 07:13:19 compute-0 friendly_ellis[74305]: /usr/bin/monmaptool: set fsid to 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:19 compute-0 friendly_ellis[74305]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 13 07:13:19 compute-0 systemd[1]: libpod-5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74292]: 2025-12-13 07:13:19.812685177 +0000 UTC m=+0.097544451 container died 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1596962d5b78904796d452d34521e5a0050d6cc353e52c16371e4a815a7ca87-merged.mount: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74292]: 2025-12-13 07:13:19.731826507 +0000 UTC m=+0.016685801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:19 compute-0 podman[74292]: 2025-12-13 07:13:19.828984139 +0000 UTC m=+0.113843414 container remove 5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc (image=quay.io/ceph/ceph:v20, name=friendly_ellis, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:13:19 compute-0 systemd[1]: libpod-conmon-5b6f792928ad4c4a8ecafc4d3aebb64227b89fc326c49c49af45f343140f6fbc.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74323]: 2025-12-13 07:13:19.870787673 +0000 UTC m=+0.026485806 container create b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 13 07:13:19 compute-0 systemd[1]: Started libpod-conmon-b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950.scope.
Dec 13 07:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c118276240ee5241c8c63028f9ca6118e65c93e4e4183af1c99e4ca7b28e5a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c118276240ee5241c8c63028f9ca6118e65c93e4e4183af1c99e4ca7b28e5a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c118276240ee5241c8c63028f9ca6118e65c93e4e4183af1c99e4ca7b28e5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c118276240ee5241c8c63028f9ca6118e65c93e4e4183af1c99e4ca7b28e5a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:19 compute-0 podman[74323]: 2025-12-13 07:13:19.917617456 +0000 UTC m=+0.073315600 container init b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 07:13:19 compute-0 podman[74323]: 2025-12-13 07:13:19.921814136 +0000 UTC m=+0.077512260 container start b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:19 compute-0 podman[74323]: 2025-12-13 07:13:19.923145269 +0000 UTC m=+0.078843392 container attach b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:19 compute-0 podman[74323]: 2025-12-13 07:13:19.860122542 +0000 UTC m=+0.015820675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:19 compute-0 systemd[1]: libpod-b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950.scope: Deactivated successfully.
Dec 13 07:13:19 compute-0 podman[74323]: 2025-12-13 07:13:19.966902736 +0000 UTC m=+0.122600869 container died b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:13:19 compute-0 podman[74323]: 2025-12-13 07:13:19.985153378 +0000 UTC m=+0.140851501 container remove b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950 (image=quay.io/ceph/ceph:v20, name=adoring_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:19 compute-0 systemd[1]: libpod-conmon-b0a0435bdd1f44dc99c22013de3955bec34d9b791d2fc91051e41c9171b2e950.scope: Deactivated successfully.
Dec 13 07:13:20 compute-0 systemd[1]: Reloading.
Dec 13 07:13:20 compute-0 systemd-rc-local-generator[74394]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:20 compute-0 systemd-sysv-generator[74398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 07:13:20 compute-0 systemd[1]: Reloading.
Dec 13 07:13:20 compute-0 systemd-rc-local-generator[74430]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:20 compute-0 systemd-sysv-generator[74435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:20 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 13 07:13:20 compute-0 systemd[1]: Reloading.
Dec 13 07:13:20 compute-0 systemd-sysv-generator[74472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:20 compute-0 systemd-rc-local-generator[74469]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:20 compute-0 systemd[1]: Reached target Ceph cluster 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:13:20 compute-0 systemd[1]: Reloading.
Dec 13 07:13:20 compute-0 systemd-sysv-generator[74514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:20 compute-0 systemd-rc-local-generator[74511]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:20 compute-0 systemd[1]: Reloading.
Dec 13 07:13:20 compute-0 systemd-rc-local-generator[74547]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:20 compute-0 systemd-sysv-generator[74550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:21 compute-0 systemd[1]: Created slice Slice /system/ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:13:21 compute-0 systemd[1]: Reached target System Time Set.
Dec 13 07:13:21 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 13 07:13:21 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 07:13:21 compute-0 podman[74604]: 2025-12-13 07:13:21.219263729 +0000 UTC m=+0.028942505 container create cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 podman[74604]: 2025-12-13 07:13:21.258832361 +0000 UTC m=+0.068511157 container init cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 07:13:21 compute-0 podman[74604]: 2025-12-13 07:13:21.263579425 +0000 UTC m=+0.073258203 container start cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 07:13:21 compute-0 bash[74604]: cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7
Dec 13 07:13:21 compute-0 podman[74604]: 2025-12-13 07:13:21.2076853 +0000 UTC m=+0.017364097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:21 compute-0 systemd[1]: Started Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:13:21 compute-0 ceph-mon[74620]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: pidfile_write: ignore empty --pid-file
Dec 13 07:13:21 compute-0 ceph-mon[74620]: load: jerasure load: lrc 
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: RocksDB version: 7.9.2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Git sha 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: DB SUMMARY
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: DB Session ID:  GNXSPATIKNS7K26A5HYA
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: CURRENT file:  CURRENT
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                         Options.error_if_exists: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                       Options.create_if_missing: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                                     Options.env: 0x55e1e6c60440
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                                Options.info_log: 0x55e1e7fddd60
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                              Options.statistics: (nil)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                               Options.use_fsync: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                              Options.db_log_dir: 
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                                 Options.wal_dir: 
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                    Options.write_buffer_manager: 0x55e1e7fe0140
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.unordered_write: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                               Options.row_cache: None
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                              Options.wal_filter: None
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.two_write_queues: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.wal_compression: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.atomic_flush: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.max_background_jobs: 2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.max_background_compactions: -1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.max_subcompactions: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.max_total_wal_size: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                          Options.max_open_files: -1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:       Options.compaction_readahead_size: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Compression algorithms supported:
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         kZSTD supported: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         kXpressCompression supported: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         kBZip2Compression supported: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         kLZ4Compression supported: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         kZlibCompression supported: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         kSnappyCompression supported: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:           Options.merge_operator: 
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:        Options.compaction_filter: None
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1e7fdccc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e1e7fd38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:        Options.write_buffer_size: 33554432
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:  Options.max_write_buffer_number: 2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:          Options.compression: NoCompression
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.num_levels: 7
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3758b366-8ed2-410f-a091-1c92e1b75bd7
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610001301195, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610001302199, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "GNXSPATIKNS7K26A5HYA", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610001302293, "job": 1, "event": "recovery_finished"}
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e1e7ffee00
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: DB pointer 0x55e1e814a000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:13:21 compute-0 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1e7fd38d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 07:13:21 compute-0 ceph-mon[74620]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@-1(???) e0 preinit fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 13 07:13:21 compute-0 ceph-mon[74620]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 13 07:13:21 compute-0 podman[74621]: 2025-12-13 07:13:21.314105266 +0000 UTC m=+0.028135116 container create 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 13 07:13:21 compute-0 ceph-mon[74620]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : last_changed 2025-12-13T07:13:19.809500+0000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : created 2025-12-13T07:13:19.809500+0000
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC 7763 64-Core Processor,created_at=2025-12-13T07:13:19.948653Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865356,os=Linux}
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).mds e1 new map
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-13T07:13:21:319345+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : fsmap 
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mkfs 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 07:13:21 compute-0 systemd[1]: Started libpod-conmon-207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69.scope.
Dec 13 07:13:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664f60b7b9757237563ec36a8e33f29399fd29ab2d3f7319849f1b99e6d0ba28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664f60b7b9757237563ec36a8e33f29399fd29ab2d3f7319849f1b99e6d0ba28/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664f60b7b9757237563ec36a8e33f29399fd29ab2d3f7319849f1b99e6d0ba28/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 podman[74621]: 2025-12-13 07:13:21.370816097 +0000 UTC m=+0.084845946 container init 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:13:21 compute-0 podman[74621]: 2025-12-13 07:13:21.37556204 +0000 UTC m=+0.089591889 container start 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 07:13:21 compute-0 podman[74621]: 2025-12-13 07:13:21.376758479 +0000 UTC m=+0.090788328 container attach 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:21 compute-0 podman[74621]: 2025-12-13 07:13:21.303897166 +0000 UTC m=+0.017927035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3560127887' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:   cluster:
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     id:     00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     health: HEALTH_OK
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:  
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:   services:
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     mon: 1 daemons, quorum compute-0 (age 0.206755s) [leader: compute-0]
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     mgr: no daemons active
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     osd: 0 osds: 0 up, 0 in
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:  
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:   data:
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     pools:   0 pools, 0 pgs
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     objects: 0 objects, 0 B
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     usage:   0 B used, 0 B / 0 B avail
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:     pgs:     
Dec 13 07:13:21 compute-0 hardcore_jang[74672]:  
Dec 13 07:13:21 compute-0 systemd[1]: libpod-207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69.scope: Deactivated successfully.
Dec 13 07:13:21 compute-0 podman[74698]: 2025-12-13 07:13:21.568643165 +0000 UTC m=+0.018044155 container died 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 07:13:21 compute-0 podman[74698]: 2025-12-13 07:13:21.583911471 +0000 UTC m=+0.033312461 container remove 207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69 (image=quay.io/ceph/ceph:v20, name=hardcore_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:21 compute-0 systemd[1]: libpod-conmon-207d7cd201be0491f2987b475ac026709a7e11643a313799f8a27060fd57aa69.scope: Deactivated successfully.
Dec 13 07:13:21 compute-0 podman[74711]: 2025-12-13 07:13:21.627903569 +0000 UTC m=+0.026149073 container create b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:13:21 compute-0 systemd[1]: Started libpod-conmon-b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c.scope.
Dec 13 07:13:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 podman[74711]: 2025-12-13 07:13:21.692299308 +0000 UTC m=+0.090544812 container init b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:21 compute-0 podman[74711]: 2025-12-13 07:13:21.697024562 +0000 UTC m=+0.095270056 container start b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:21 compute-0 podman[74711]: 2025-12-13 07:13:21.69826375 +0000 UTC m=+0.096509265 container attach b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:21 compute-0 podman[74711]: 2025-12-13 07:13:21.617958442 +0000 UTC m=+0.016203956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:21 compute-0 ceph-mon[74620]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1660714730' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 07:13:21 compute-0 ceph-mon[74620]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1660714730' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 07:13:21 compute-0 quizzical_shannon[74724]: 
Dec 13 07:13:21 compute-0 quizzical_shannon[74724]: [global]
Dec 13 07:13:21 compute-0 quizzical_shannon[74724]:         fsid = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:21 compute-0 quizzical_shannon[74724]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 13 07:13:21 compute-0 quizzical_shannon[74724]:         osd_crush_chooseleaf_type = 0
Dec 13 07:13:21 compute-0 systemd[1]: libpod-b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c.scope: Deactivated successfully.
Dec 13 07:13:21 compute-0 podman[74750]: 2025-12-13 07:13:21.882955824 +0000 UTC m=+0.016315565 container died b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 07:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-072e7c47e148b4ea0f2a661df1575a2e5825b9ed5b041e7b11dc8a20e9ba9dcc-merged.mount: Deactivated successfully.
Dec 13 07:13:21 compute-0 podman[74750]: 2025-12-13 07:13:21.901077585 +0000 UTC m=+0.034437315 container remove b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c (image=quay.io/ceph/ceph:v20, name=quizzical_shannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:21 compute-0 systemd[1]: libpod-conmon-b57acc39a6a2351ad7be86559985ca692b0086f943b77417be50296f5826741c.scope: Deactivated successfully.
Dec 13 07:13:21 compute-0 podman[74761]: 2025-12-13 07:13:21.946285328 +0000 UTC m=+0.026601263 container create 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:13:21 compute-0 systemd[1]: Started libpod-conmon-1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0.scope.
Dec 13 07:13:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:21 compute-0 podman[74761]: 2025-12-13 07:13:21.99116264 +0000 UTC m=+0.071478595 container init 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:21 compute-0 podman[74761]: 2025-12-13 07:13:21.998224758 +0000 UTC m=+0.078540693 container start 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:13:21 compute-0 podman[74761]: 2025-12-13 07:13:21.999731501 +0000 UTC m=+0.080047436 container attach 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:13:22 compute-0 podman[74761]: 2025-12-13 07:13:21.936533874 +0000 UTC m=+0.016849830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:22 compute-0 ceph-mon[74620]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:13:22 compute-0 ceph-mon[74620]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153588289' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:22 compute-0 systemd[1]: libpod-1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0.scope: Deactivated successfully.
Dec 13 07:13:22 compute-0 podman[74761]: 2025-12-13 07:13:22.154952275 +0000 UTC m=+0.235268201 container died 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9ed8e6f918487ab094cf746fa255a2156e4a789bfb755a5931a4a5f4b3a440c-merged.mount: Deactivated successfully.
Dec 13 07:13:22 compute-0 podman[74761]: 2025-12-13 07:13:22.17715061 +0000 UTC m=+0.257466546 container remove 1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0 (image=quay.io/ceph/ceph:v20, name=priceless_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:13:22 compute-0 systemd[1]: libpod-conmon-1044d67893a00a4bb4c4cc1decc489c0c8238a203bc32bdbfe5e8d7f02a00be0.scope: Deactivated successfully.
Dec 13 07:13:22 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:13:22 compute-0 ceph-mon[74620]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 13 07:13:22 compute-0 ceph-mon[74620]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 13 07:13:22 compute-0 ceph-mon[74620]: mon.compute-0@0(leader) e1 shutdown
Dec 13 07:13:22 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0[74616]: 2025-12-13T07:13:22.298+0000 7fb76168f640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 13 07:13:22 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0[74616]: 2025-12-13T07:13:22.298+0000 7fb76168f640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 13 07:13:22 compute-0 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 07:13:22 compute-0 ceph-mon[74620]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 07:13:22 compute-0 podman[74833]: 2025-12-13 07:13:22.35133029 +0000 UTC m=+0.071721423 container died cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-00ea2ba308b61d2765f9655418fcf47bafa1ec42002fb2e6727140332a92248d-merged.mount: Deactivated successfully.
Dec 13 07:13:22 compute-0 podman[74833]: 2025-12-13 07:13:22.367197802 +0000 UTC m=+0.087588934 container remove cf19d3e7e89772e136718bcc204235ac2fd20a43efb8d66db68ecda56cf2d7b7 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 07:13:22 compute-0 bash[74833]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0
Dec 13 07:13:22 compute-0 systemd[1]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mon.compute-0.service: Deactivated successfully.
Dec 13 07:13:22 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:13:22 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:13:22 compute-0 podman[74912]: 2025-12-13 07:13:22.591811841 +0000 UTC m=+0.025433488 container create 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de23cd3ce62c8812bfbedb19351e447a14c16ae4f654415c23d9eadaf14158e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de23cd3ce62c8812bfbedb19351e447a14c16ae4f654415c23d9eadaf14158e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de23cd3ce62c8812bfbedb19351e447a14c16ae4f654415c23d9eadaf14158e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de23cd3ce62c8812bfbedb19351e447a14c16ae4f654415c23d9eadaf14158e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:22 compute-0 podman[74912]: 2025-12-13 07:13:22.634746391 +0000 UTC m=+0.068368048 container init 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:13:22 compute-0 podman[74912]: 2025-12-13 07:13:22.639197369 +0000 UTC m=+0.072819016 container start 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:22 compute-0 bash[74912]: 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a
Dec 13 07:13:22 compute-0 podman[74912]: 2025-12-13 07:13:22.580805759 +0000 UTC m=+0.014427416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:22 compute-0 systemd[1]: Started Ceph mon.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:13:22 compute-0 ceph-mon[74928]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 13 07:13:22 compute-0 ceph-mon[74928]: pidfile_write: ignore empty --pid-file
Dec 13 07:13:22 compute-0 ceph-mon[74928]: load: jerasure load: lrc 
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: RocksDB version: 7.9.2
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Git sha 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: DB SUMMARY
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: DB Session ID:  1EYF1QT48HSM3ZBGDMBQ
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: CURRENT file:  CURRENT
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 48303 ; 
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                         Options.error_if_exists: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                       Options.create_if_missing: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                                     Options.env: 0x5642b92b0440
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                                Options.info_log: 0x5642ba2a2000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                              Options.statistics: (nil)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                               Options.use_fsync: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                              Options.db_log_dir: 
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                                 Options.wal_dir: 
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                    Options.write_buffer_manager: 0x5642ba296140
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.unordered_write: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                               Options.row_cache: None
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                              Options.wal_filter: None
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.two_write_queues: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.wal_compression: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.atomic_flush: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.max_background_jobs: 2
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.max_background_compactions: -1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.max_subcompactions: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.max_total_wal_size: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                          Options.max_open_files: -1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:       Options.compaction_readahead_size: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Compression algorithms supported:
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         kZSTD supported: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         kXpressCompression supported: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         kBZip2Compression supported: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         kLZ4Compression supported: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         kZlibCompression supported: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         kSnappyCompression supported: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:           Options.merge_operator: 
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:        Options.compaction_filter: None
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5642ba293a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5642ba289a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:        Options.write_buffer_size: 33554432
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:  Options.max_write_buffer_number: 2
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:          Options.compression: NoCompression
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.num_levels: 7
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3758b366-8ed2-410f-a091-1c92e1b75bd7
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610002671164, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610002672467, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 48181, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 123, "table_properties": {"data_size": 46755, "index_size": 132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2974, "raw_average_key_size": 31, "raw_value_size": 44371, "raw_average_value_size": 472, "num_data_blocks": 7, "num_entries": 94, "num_filter_entries": 94, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610002, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610002672563, "job": 1, "event": "recovery_finished"}
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5642ba2b4e00
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: DB pointer 0x5642ba406000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:13:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   48.95 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      2/0   48.95 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 8.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 8.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5642ba289a30#2 capacity: 512.00 MB usage: 0.75 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.33 KB,6.25849e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 07:13:22 compute-0 ceph-mon[74928]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(???) e1 preinit fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).mds e1 new map
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-13T07:13:21:319345+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 13 07:13:22 compute-0 ceph-mon[74928]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : last_changed 2025-12-13T07:13:19.809500+0000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : created 2025-12-13T07:13:19.809500+0000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap 
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 13 07:13:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 13 07:13:22 compute-0 podman[74929]: 2025-12-13 07:13:22.685495322 +0000 UTC m=+0.027740615 container create 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:22 compute-0 systemd[1]: Started libpod-conmon-8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647.scope.
Dec 13 07:13:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06c9ead092b5c47c357fbd10a498694e68aeac8c0f74a34de693527fe47764/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: monmap epoch 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:22 compute-0 ceph-mon[74928]: last_changed 2025-12-13T07:13:19.809500+0000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: created 2025-12-13T07:13:19.809500+0000
Dec 13 07:13:22 compute-0 ceph-mon[74928]: min_mon_release 20 (tentacle)
Dec 13 07:13:22 compute-0 ceph-mon[74928]: election_strategy: 1
Dec 13 07:13:22 compute-0 ceph-mon[74928]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 07:13:22 compute-0 ceph-mon[74928]: fsmap 
Dec 13 07:13:22 compute-0 ceph-mon[74928]: osdmap e1: 0 total, 0 up, 0 in
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mgrmap e1: no daemons active
Dec 13 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06c9ead092b5c47c357fbd10a498694e68aeac8c0f74a34de693527fe47764/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06c9ead092b5c47c357fbd10a498694e68aeac8c0f74a34de693527fe47764/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:22 compute-0 podman[74929]: 2025-12-13 07:13:22.747513541 +0000 UTC m=+0.089758833 container init 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:13:22 compute-0 podman[74929]: 2025-12-13 07:13:22.75284786 +0000 UTC m=+0.095093153 container start 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 13 07:13:22 compute-0 podman[74929]: 2025-12-13 07:13:22.754040221 +0000 UTC m=+0.096285514 container attach 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:22 compute-0 podman[74929]: 2025-12-13 07:13:22.675425731 +0000 UTC m=+0.017671044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 13 07:13:22 compute-0 systemd[1]: libpod-8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647.scope: Deactivated successfully.
Dec 13 07:13:22 compute-0 conmon[74980]: conmon 8e9101a52e46eb88d40b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647.scope/container/memory.events
Dec 13 07:13:22 compute-0 podman[75006]: 2025-12-13 07:13:22.93955994 +0000 UTC m=+0.015565705 container died 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 07:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad06c9ead092b5c47c357fbd10a498694e68aeac8c0f74a34de693527fe47764-merged.mount: Deactivated successfully.
Dec 13 07:13:22 compute-0 podman[75006]: 2025-12-13 07:13:22.955252544 +0000 UTC m=+0.031258289 container remove 8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647 (image=quay.io/ceph/ceph:v20, name=affectionate_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:22 compute-0 systemd[1]: libpod-conmon-8e9101a52e46eb88d40b0cf74ce8b6f73510841513095c2f339f715921c2d647.scope: Deactivated successfully.
Dec 13 07:13:22 compute-0 podman[75017]: 2025-12-13 07:13:22.996790759 +0000 UTC m=+0.024430843 container create 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 07:13:23 compute-0 systemd[1]: Started libpod-conmon-111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20.scope.
Dec 13 07:13:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c0eec1db992a251119fa9c29f35c62f8e3e0f17a50839b30e7eb548846f306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c0eec1db992a251119fa9c29f35c62f8e3e0f17a50839b30e7eb548846f306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c0eec1db992a251119fa9c29f35c62f8e3e0f17a50839b30e7eb548846f306/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 podman[75017]: 2025-12-13 07:13:23.046964799 +0000 UTC m=+0.074604873 container init 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:23 compute-0 podman[75017]: 2025-12-13 07:13:23.05114594 +0000 UTC m=+0.078786014 container start 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 07:13:23 compute-0 podman[75017]: 2025-12-13 07:13:23.052277728 +0000 UTC m=+0.079917802 container attach 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:13:23 compute-0 podman[75017]: 2025-12-13 07:13:22.987106491 +0000 UTC m=+0.014746575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 13 07:13:23 compute-0 systemd[1]: libpod-111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20.scope: Deactivated successfully.
Dec 13 07:13:23 compute-0 podman[75017]: 2025-12-13 07:13:23.203381844 +0000 UTC m=+0.231021918 container died 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2c0eec1db992a251119fa9c29f35c62f8e3e0f17a50839b30e7eb548846f306-merged.mount: Deactivated successfully.
Dec 13 07:13:23 compute-0 podman[75017]: 2025-12-13 07:13:23.220914266 +0000 UTC m=+0.248554330 container remove 111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20 (image=quay.io/ceph/ceph:v20, name=zen_einstein, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:13:23 compute-0 systemd[1]: libpod-conmon-111135779fa8d2a5a5891cb252129bf40185748571adc3e93f282101d91b4d20.scope: Deactivated successfully.
Dec 13 07:13:23 compute-0 systemd[1]: Reloading.
Dec 13 07:13:23 compute-0 systemd-rc-local-generator[75087]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:23 compute-0 systemd-sysv-generator[75090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:23 compute-0 systemd[1]: Reloading.
Dec 13 07:13:23 compute-0 systemd-rc-local-generator[75131]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:23 compute-0 systemd-sysv-generator[75134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:23 compute-0 systemd[1]: Starting Ceph mgr.compute-0.qsherl for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:13:23 compute-0 podman[75184]: 2025-12-13 07:13:23.767603699 +0000 UTC m=+0.026056689 container create 4d78867918d5dd4dba36f3a6dc6db4122866221ae6fbf48a37819c5ae84e8283 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346fc788cab02aea4507e4bda75119dcdb6967076b2f735cd53af7434813aca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346fc788cab02aea4507e4bda75119dcdb6967076b2f735cd53af7434813aca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346fc788cab02aea4507e4bda75119dcdb6967076b2f735cd53af7434813aca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346fc788cab02aea4507e4bda75119dcdb6967076b2f735cd53af7434813aca9/merged/var/lib/ceph/mgr/ceph-compute-0.qsherl supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 podman[75184]: 2025-12-13 07:13:23.806732413 +0000 UTC m=+0.065185413 container init 4d78867918d5dd4dba36f3a6dc6db4122866221ae6fbf48a37819c5ae84e8283 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:23 compute-0 podman[75184]: 2025-12-13 07:13:23.811110995 +0000 UTC m=+0.069563985 container start 4d78867918d5dd4dba36f3a6dc6db4122866221ae6fbf48a37819c5ae84e8283 (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:13:23 compute-0 bash[75184]: 4d78867918d5dd4dba36f3a6dc6db4122866221ae6fbf48a37819c5ae84e8283
Dec 13 07:13:23 compute-0 podman[75184]: 2025-12-13 07:13:23.756728462 +0000 UTC m=+0.015181463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:23 compute-0 systemd[1]: Started Ceph mgr.compute-0.qsherl for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:13:23 compute-0 ceph-mgr[75200]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:13:23 compute-0 ceph-mgr[75200]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 07:13:23 compute-0 ceph-mgr[75200]: pidfile_write: ignore empty --pid-file
Dec 13 07:13:23 compute-0 podman[75201]: 2025-12-13 07:13:23.8611115 +0000 UTC m=+0.028950380 container create 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 07:13:23 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'alerts'
Dec 13 07:13:23 compute-0 systemd[1]: Started libpod-conmon-9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e.scope.
Dec 13 07:13:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d609d25c4037f130793e222aa764b0d016d848c851a41fcedab4a7141a583e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d609d25c4037f130793e222aa764b0d016d848c851a41fcedab4a7141a583e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d609d25c4037f130793e222aa764b0d016d848c851a41fcedab4a7141a583e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:23 compute-0 podman[75201]: 2025-12-13 07:13:23.920912039 +0000 UTC m=+0.088750929 container init 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:13:23 compute-0 podman[75201]: 2025-12-13 07:13:23.926493712 +0000 UTC m=+0.094332593 container start 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 07:13:23 compute-0 podman[75201]: 2025-12-13 07:13:23.928280762 +0000 UTC m=+0.096119662 container attach 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 07:13:23 compute-0 podman[75201]: 2025-12-13 07:13:23.850056445 +0000 UTC m=+0.017895335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:23 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'balancer'
Dec 13 07:13:24 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'cephadm'
Dec 13 07:13:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 07:13:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310964443' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]: 
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]: {
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "health": {
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "status": "HEALTH_OK",
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "checks": {},
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "mutes": []
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     },
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "election_epoch": 5,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "quorum": [
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         0
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     ],
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "quorum_names": [
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "compute-0"
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     ],
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "quorum_age": 1,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "monmap": {
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "epoch": 1,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "min_mon_release_name": "tentacle",
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_mons": 1
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     },
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "osdmap": {
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "epoch": 1,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_osds": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_up_osds": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "osd_up_since": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_in_osds": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "osd_in_since": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_remapped_pgs": 0
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     },
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "pgmap": {
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "pgs_by_state": [],
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_pgs": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_pools": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_objects": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "data_bytes": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "bytes_used": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "bytes_avail": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "bytes_total": 0
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     },
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "fsmap": {
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "epoch": 1,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "btime": "2025-12-13T07:13:21:319345+0000",
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "by_rank": [],
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "up:standby": 0
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     },
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "mgrmap": {
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "available": false,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "num_standbys": 0,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "modules": [
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:             "iostat",
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:             "nfs"
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         ],
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "services": {}
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     },
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "servicemap": {
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "epoch": 1,
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "modified": "2025-12-13T07:13:21.320643+0000",
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:         "services": {}
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     },
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]:     "progress_events": {}
Dec 13 07:13:24 compute-0 quizzical_joliot[75234]: }
Dec 13 07:13:24 compute-0 systemd[1]: libpod-9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e.scope: Deactivated successfully.
Dec 13 07:13:24 compute-0 podman[75201]: 2025-12-13 07:13:24.089355785 +0000 UTC m=+0.257194665 container died 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d609d25c4037f130793e222aa764b0d016d848c851a41fcedab4a7141a583e9-merged.mount: Deactivated successfully.
Dec 13 07:13:24 compute-0 podman[75201]: 2025-12-13 07:13:24.123822975 +0000 UTC m=+0.291661854 container remove 9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 07:13:24 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3310964443' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 07:13:24 compute-0 systemd[1]: libpod-conmon-9a02dd76d1d173632c97c7c720ca7bebed44ec2a5d63b0d230e40b2dba92d56e.scope: Deactivated successfully.
Dec 13 07:13:24 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'crash'
Dec 13 07:13:24 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'dashboard'
Dec 13 07:13:25 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'devicehealth'
Dec 13 07:13:25 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 07:13:25 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 07:13:25 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 07:13:25 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]:   from numpy import show_config as show_numpy_config
Dec 13 07:13:25 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'influx'
Dec 13 07:13:25 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'insights'
Dec 13 07:13:25 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'iostat'
Dec 13 07:13:25 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'k8sevents'
Dec 13 07:13:26 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'localpool'
Dec 13 07:13:26 compute-0 podman[75280]: 2025-12-13 07:13:26.166920586 +0000 UTC m=+0.026940862 container create d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:26 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 07:13:26 compute-0 systemd[1]: Started libpod-conmon-d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0.scope.
Dec 13 07:13:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bdb0b39b4002c8f374d231d46267aa5ba90adaa9e2c5dc90178eab7d1e6857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bdb0b39b4002c8f374d231d46267aa5ba90adaa9e2c5dc90178eab7d1e6857/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bdb0b39b4002c8f374d231d46267aa5ba90adaa9e2c5dc90178eab7d1e6857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:26 compute-0 podman[75280]: 2025-12-13 07:13:26.223948392 +0000 UTC m=+0.083968687 container init d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 07:13:26 compute-0 podman[75280]: 2025-12-13 07:13:26.228185147 +0000 UTC m=+0.088205423 container start d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:26 compute-0 podman[75280]: 2025-12-13 07:13:26.230543641 +0000 UTC m=+0.090563917 container attach d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:13:26 compute-0 podman[75280]: 2025-12-13 07:13:26.155887112 +0000 UTC m=+0.015907407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 07:13:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693822294' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]: 
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]: {
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "health": {
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "status": "HEALTH_OK",
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "checks": {},
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "mutes": []
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     },
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "election_epoch": 5,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "quorum": [
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         0
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     ],
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "quorum_names": [
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "compute-0"
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     ],
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "quorum_age": 3,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "monmap": {
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "epoch": 1,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "min_mon_release_name": "tentacle",
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_mons": 1
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     },
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "osdmap": {
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "epoch": 1,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_osds": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_up_osds": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "osd_up_since": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_in_osds": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "osd_in_since": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_remapped_pgs": 0
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     },
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "pgmap": {
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "pgs_by_state": [],
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_pgs": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_pools": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_objects": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "data_bytes": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "bytes_used": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "bytes_avail": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "bytes_total": 0
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     },
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "fsmap": {
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "epoch": 1,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "btime": "2025-12-13T07:13:21:319345+0000",
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "by_rank": [],
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "up:standby": 0
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     },
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "mgrmap": {
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "available": false,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "num_standbys": 0,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "modules": [
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:             "iostat",
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:             "nfs"
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         ],
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "services": {}
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     },
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "servicemap": {
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "epoch": 1,
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "modified": "2025-12-13T07:13:21.320643+0000",
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:         "services": {}
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     },
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]:     "progress_events": {}
Dec 13 07:13:26 compute-0 ecstatic_mendel[75293]: }
Dec 13 07:13:26 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'mirroring'
Dec 13 07:13:26 compute-0 systemd[1]: libpod-d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0.scope: Deactivated successfully.
Dec 13 07:13:26 compute-0 conmon[75293]: conmon d2c1cb2be971b812a45f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0.scope/container/memory.events
Dec 13 07:13:26 compute-0 podman[75280]: 2025-12-13 07:13:26.38864384 +0000 UTC m=+0.248664126 container died d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-58bdb0b39b4002c8f374d231d46267aa5ba90adaa9e2c5dc90178eab7d1e6857-merged.mount: Deactivated successfully.
Dec 13 07:13:26 compute-0 podman[75280]: 2025-12-13 07:13:26.413208122 +0000 UTC m=+0.273228398 container remove d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0 (image=quay.io/ceph/ceph:v20, name=ecstatic_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 07:13:26 compute-0 systemd[1]: libpod-conmon-d2c1cb2be971b812a45f5a2eb25b5cb98ec7d10becff10688f79f22c479546b0.scope: Deactivated successfully.
Dec 13 07:13:26 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3693822294' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 07:13:26 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'nfs'
Dec 13 07:13:26 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'orchestrator'
Dec 13 07:13:26 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 07:13:26 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'osd_support'
Dec 13 07:13:27 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 07:13:27 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'progress'
Dec 13 07:13:27 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'prometheus'
Dec 13 07:13:27 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'rbd_support'
Dec 13 07:13:27 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'rgw'
Dec 13 07:13:27 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'rook'
Dec 13 07:13:28 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'selftest'
Dec 13 07:13:28 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'smb'
Dec 13 07:13:28 compute-0 podman[75330]: 2025-12-13 07:13:28.453961726 +0000 UTC m=+0.024374607 container create 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:13:28 compute-0 systemd[1]: Started libpod-conmon-0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b.scope.
Dec 13 07:13:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4b1b4ccf953389f78d1c457320fd2224512e7a3c5bcc91f1b9960a2a279941/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4b1b4ccf953389f78d1c457320fd2224512e7a3c5bcc91f1b9960a2a279941/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4b1b4ccf953389f78d1c457320fd2224512e7a3c5bcc91f1b9960a2a279941/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:28 compute-0 podman[75330]: 2025-12-13 07:13:28.513207302 +0000 UTC m=+0.083620173 container init 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 07:13:28 compute-0 podman[75330]: 2025-12-13 07:13:28.517178378 +0000 UTC m=+0.087591248 container start 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:28 compute-0 podman[75330]: 2025-12-13 07:13:28.518206331 +0000 UTC m=+0.088619202 container attach 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:13:28 compute-0 podman[75330]: 2025-12-13 07:13:28.444764244 +0000 UTC m=+0.015177135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:28 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'snap_schedule'
Dec 13 07:13:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 07:13:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952436456' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]: 
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]: {
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "health": {
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "status": "HEALTH_OK",
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "checks": {},
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "mutes": []
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     },
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "election_epoch": 5,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "quorum": [
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         0
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     ],
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "quorum_names": [
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "compute-0"
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     ],
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "quorum_age": 5,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "monmap": {
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "epoch": 1,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "min_mon_release_name": "tentacle",
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_mons": 1
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     },
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "osdmap": {
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "epoch": 1,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_osds": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_up_osds": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "osd_up_since": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_in_osds": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "osd_in_since": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_remapped_pgs": 0
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     },
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "pgmap": {
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "pgs_by_state": [],
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_pgs": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_pools": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_objects": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "data_bytes": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "bytes_used": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "bytes_avail": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "bytes_total": 0
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     },
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "fsmap": {
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "epoch": 1,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "btime": "2025-12-13T07:13:21:319345+0000",
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "by_rank": [],
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "up:standby": 0
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     },
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "mgrmap": {
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "available": false,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "num_standbys": 0,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "modules": [
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:             "iostat",
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:             "nfs"
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         ],
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "services": {}
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     },
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "servicemap": {
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "epoch": 1,
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "modified": "2025-12-13T07:13:21.320643+0000",
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:         "services": {}
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     },
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]:     "progress_events": {}
Dec 13 07:13:28 compute-0 blissful_engelbart[75343]: }
Dec 13 07:13:28 compute-0 systemd[1]: libpod-0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b.scope: Deactivated successfully.
Dec 13 07:13:28 compute-0 podman[75330]: 2025-12-13 07:13:28.670251916 +0000 UTC m=+0.240664787 container died 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 07:13:28 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'stats'
Dec 13 07:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd4b1b4ccf953389f78d1c457320fd2224512e7a3c5bcc91f1b9960a2a279941-merged.mount: Deactivated successfully.
Dec 13 07:13:28 compute-0 podman[75330]: 2025-12-13 07:13:28.68854635 +0000 UTC m=+0.258959222 container remove 0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b (image=quay.io/ceph/ceph:v20, name=blissful_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:13:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3952436456' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 07:13:28 compute-0 systemd[1]: libpod-conmon-0c4eb6ba816b0ac6c0a6abe5f734bb712d08402fe900dc31507d44ff44ea486b.scope: Deactivated successfully.
Dec 13 07:13:28 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'status'
Dec 13 07:13:28 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'telegraf'
Dec 13 07:13:28 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'telemetry'
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'volumes'
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: ms_deliver_dispatch: unhandled message 0x5571dc4d9860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qsherl
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr handle_mgr_map Activating!
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr handle_mgr_map I am now activating
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.qsherl(active, starting, since 0.00483231s)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e1 all = 1
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} v 0)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Manager daemon compute-0.qsherl is now available
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: balancer
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [balancer INFO root] Starting
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: crash
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:13:29
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [balancer INFO root] No pools available
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: devicehealth
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [devicehealth INFO root] Starting
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: iostat
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: nfs
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: orchestrator
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: pg_autoscaler
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: progress
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [progress INFO root] Loading...
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [progress INFO root] No stored events to load
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [progress INFO root] Loaded [] historic events
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [progress INFO root] Loaded OSDMap, ready.
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] recovery thread starting
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] starting setup
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: rbd_support
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: status
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} v 0)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: telemetry
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] PerfHandler: starting
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TaskHandler: starting
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} v 0)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: [rbd_support INFO root] setup complete
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:29 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: volumes
Dec 13 07:13:29 compute-0 ceph-mon[74928]: Activating manager daemon compute-0.qsherl
Dec 13 07:13:29 compute-0 ceph-mon[74928]: mgrmap e2: compute-0.qsherl(active, starting, since 0.00483231s)
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: Manager daemon compute-0.qsherl is now available
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} : dispatch
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:29 compute-0 ceph-mon[74928]: from='mgr.14102 192.168.122.100:0/329199589' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:30 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.qsherl(active, since 1.00935s)
Dec 13 07:13:30 compute-0 podman[75457]: 2025-12-13 07:13:30.734518707 +0000 UTC m=+0.028824262 container create 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 07:13:30 compute-0 systemd[1]: Started libpod-conmon-62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453.scope.
Dec 13 07:13:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98036d5ff6ff5a5ea36863de1b3deffd92dd0110f6946c158a80b7e37a871da2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98036d5ff6ff5a5ea36863de1b3deffd92dd0110f6946c158a80b7e37a871da2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98036d5ff6ff5a5ea36863de1b3deffd92dd0110f6946c158a80b7e37a871da2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:30 compute-0 podman[75457]: 2025-12-13 07:13:30.775730037 +0000 UTC m=+0.070035582 container init 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:30 compute-0 podman[75457]: 2025-12-13 07:13:30.779679072 +0000 UTC m=+0.073984616 container start 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:30 compute-0 podman[75457]: 2025-12-13 07:13:30.780759472 +0000 UTC m=+0.075065017 container attach 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:30 compute-0 podman[75457]: 2025-12-13 07:13:30.721895454 +0000 UTC m=+0.016201019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 07:13:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/779458911' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]: 
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]: {
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "health": {
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "status": "HEALTH_OK",
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "checks": {},
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "mutes": []
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     },
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "election_epoch": 5,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "quorum": [
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         0
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     ],
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "quorum_names": [
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "compute-0"
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     ],
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "quorum_age": 8,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "monmap": {
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "epoch": 1,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "min_mon_release_name": "tentacle",
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_mons": 1
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     },
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "osdmap": {
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "epoch": 1,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_osds": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_up_osds": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "osd_up_since": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_in_osds": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "osd_in_since": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_remapped_pgs": 0
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     },
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "pgmap": {
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "pgs_by_state": [],
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_pgs": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_pools": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_objects": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "data_bytes": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "bytes_used": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "bytes_avail": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "bytes_total": 0
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     },
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "fsmap": {
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "epoch": 1,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "btime": "2025-12-13T07:13:21:319345+0000",
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "by_rank": [],
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "up:standby": 0
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     },
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "mgrmap": {
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "available": true,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "num_standbys": 0,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "modules": [
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:             "iostat",
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:             "nfs"
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         ],
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "services": {}
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     },
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "servicemap": {
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "epoch": 1,
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "modified": "2025-12-13T07:13:21.320643+0000",
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:         "services": {}
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     },
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]:     "progress_events": {}
Dec 13 07:13:31 compute-0 busy_heisenberg[75470]: }
Dec 13 07:13:31 compute-0 systemd[1]: libpod-62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453.scope: Deactivated successfully.
Dec 13 07:13:31 compute-0 podman[75497]: 2025-12-13 07:13:31.199339751 +0000 UTC m=+0.014690350 container died 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-98036d5ff6ff5a5ea36863de1b3deffd92dd0110f6946c158a80b7e37a871da2-merged.mount: Deactivated successfully.
Dec 13 07:13:31 compute-0 podman[75497]: 2025-12-13 07:13:31.215112906 +0000 UTC m=+0.030463485 container remove 62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453 (image=quay.io/ceph/ceph:v20, name=busy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 07:13:31 compute-0 systemd[1]: libpod-conmon-62ddf3c3ca79cb43a49efbf89a7992724d93a071168c5defde18cd99a1299453.scope: Deactivated successfully.
Dec 13 07:13:31 compute-0 podman[75508]: 2025-12-13 07:13:31.255570058 +0000 UTC m=+0.024169891 container create fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:13:31 compute-0 systemd[1]: Started libpod-conmon-fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b.scope.
Dec 13 07:13:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:31 compute-0 podman[75508]: 2025-12-13 07:13:31.317307558 +0000 UTC m=+0.085907402 container init fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 07:13:31 compute-0 podman[75508]: 2025-12-13 07:13:31.320900854 +0000 UTC m=+0.089500688 container start fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 07:13:31 compute-0 podman[75508]: 2025-12-13 07:13:31.321874935 +0000 UTC m=+0.090474770 container attach fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:31 compute-0 podman[75508]: 2025-12-13 07:13:31.245729577 +0000 UTC m=+0.014329431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:31 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:31 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.qsherl(active, since 2s)
Dec 13 07:13:31 compute-0 ceph-mon[74928]: mgrmap e3: compute-0.qsherl(active, since 1.00935s)
Dec 13 07:13:31 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/779458911' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 07:13:31 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 07:13:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1106528978' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 07:13:31 compute-0 youthful_panini[75521]: 
Dec 13 07:13:31 compute-0 youthful_panini[75521]: [global]
Dec 13 07:13:31 compute-0 youthful_panini[75521]:         fsid = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:31 compute-0 youthful_panini[75521]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 13 07:13:31 compute-0 youthful_panini[75521]:         osd_crush_chooseleaf_type = 0
Dec 13 07:13:31 compute-0 systemd[1]: libpod-fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b.scope: Deactivated successfully.
Dec 13 07:13:31 compute-0 podman[75508]: 2025-12-13 07:13:31.630256323 +0000 UTC m=+0.398856167 container died fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 07:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0451bc72b7e8a54f7f698e8b0b0f8e21f07a43d2c43466c66b9d35617e03747-merged.mount: Deactivated successfully.
Dec 13 07:13:31 compute-0 podman[75508]: 2025-12-13 07:13:31.647006204 +0000 UTC m=+0.415606038 container remove fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b (image=quay.io/ceph/ceph:v20, name=youthful_panini, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 07:13:31 compute-0 systemd[1]: libpod-conmon-fff1a7f0181425627aa01d65a9467e734178729f7295447762a415918cc8638b.scope: Deactivated successfully.
Dec 13 07:13:31 compute-0 podman[75555]: 2025-12-13 07:13:31.684766005 +0000 UTC m=+0.024470697 container create dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:31 compute-0 systemd[1]: Started libpod-conmon-dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d.scope.
Dec 13 07:13:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23360571854d379264998ac9afb4ecd652a173a6b2b25b7befaabad3375576b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23360571854d379264998ac9afb4ecd652a173a6b2b25b7befaabad3375576b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23360571854d379264998ac9afb4ecd652a173a6b2b25b7befaabad3375576b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:31 compute-0 podman[75555]: 2025-12-13 07:13:31.725568767 +0000 UTC m=+0.065273479 container init dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:13:31 compute-0 podman[75555]: 2025-12-13 07:13:31.729307917 +0000 UTC m=+0.069012619 container start dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:31 compute-0 podman[75555]: 2025-12-13 07:13:31.73040006 +0000 UTC m=+0.070104752 container attach dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 07:13:31 compute-0 podman[75555]: 2025-12-13 07:13:31.674696444 +0000 UTC m=+0.014401156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 13 07:13:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3001210130' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 13 07:13:32 compute-0 ceph-mon[74928]: mgrmap e4: compute-0.qsherl(active, since 2s)
Dec 13 07:13:32 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1106528978' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 07:13:32 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3001210130' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 13 07:13:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3001210130' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 13 07:13:32 compute-0 ceph-mgr[75200]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 13 07:13:32 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.qsherl(active, since 3s)
Dec 13 07:13:32 compute-0 systemd[1]: libpod-dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d.scope: Deactivated successfully.
Dec 13 07:13:32 compute-0 conmon[75568]: conmon dd89e672ba1e2fa098b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d.scope/container/memory.events
Dec 13 07:13:32 compute-0 podman[75555]: 2025-12-13 07:13:32.502146605 +0000 UTC m=+0.841851297 container died dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f23360571854d379264998ac9afb4ecd652a173a6b2b25b7befaabad3375576b-merged.mount: Deactivated successfully.
Dec 13 07:13:32 compute-0 podman[75555]: 2025-12-13 07:13:32.526917547 +0000 UTC m=+0.866622239 container remove dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d (image=quay.io/ceph/ceph:v20, name=amazing_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:32 compute-0 systemd[1]: libpod-conmon-dd89e672ba1e2fa098b9036ff8a90e16b5ee466ed6f52eb078e67cf7d148626d.scope: Deactivated successfully.
Dec 13 07:13:32 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: ignoring --setuser ceph since I am not root
Dec 13 07:13:32 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: ignoring --setgroup ceph since I am not root
Dec 13 07:13:32 compute-0 ceph-mgr[75200]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 07:13:32 compute-0 ceph-mgr[75200]: pidfile_write: ignore empty --pid-file
Dec 13 07:13:32 compute-0 podman[75604]: 2025-12-13 07:13:32.579375361 +0000 UTC m=+0.032000895 container create 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:13:32 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'alerts'
Dec 13 07:13:32 compute-0 systemd[1]: Started libpod-conmon-38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242.scope.
Dec 13 07:13:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67742d6d53a4a08afeda6ff472d92059dfab7f6469a989995bc5cfbdf84fe12e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67742d6d53a4a08afeda6ff472d92059dfab7f6469a989995bc5cfbdf84fe12e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67742d6d53a4a08afeda6ff472d92059dfab7f6469a989995bc5cfbdf84fe12e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:32 compute-0 podman[75604]: 2025-12-13 07:13:32.630812566 +0000 UTC m=+0.083438101 container init 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:13:32 compute-0 podman[75604]: 2025-12-13 07:13:32.634671191 +0000 UTC m=+0.087296715 container start 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:13:32 compute-0 podman[75604]: 2025-12-13 07:13:32.635597052 +0000 UTC m=+0.088222576 container attach 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:13:32 compute-0 podman[75604]: 2025-12-13 07:13:32.566223474 +0000 UTC m=+0.018849018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:32 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'balancer'
Dec 13 07:13:32 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'cephadm'
Dec 13 07:13:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 07:13:32 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4121678077' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 07:13:32 compute-0 trusting_hertz[75637]: {
Dec 13 07:13:32 compute-0 trusting_hertz[75637]:     "epoch": 5,
Dec 13 07:13:32 compute-0 trusting_hertz[75637]:     "available": true,
Dec 13 07:13:32 compute-0 trusting_hertz[75637]:     "active_name": "compute-0.qsherl",
Dec 13 07:13:32 compute-0 trusting_hertz[75637]:     "num_standby": 0
Dec 13 07:13:32 compute-0 trusting_hertz[75637]: }
Dec 13 07:13:33 compute-0 systemd[1]: libpod-38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242.scope: Deactivated successfully.
Dec 13 07:13:33 compute-0 podman[75663]: 2025-12-13 07:13:33.036328743 +0000 UTC m=+0.017154283 container died 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-67742d6d53a4a08afeda6ff472d92059dfab7f6469a989995bc5cfbdf84fe12e-merged.mount: Deactivated successfully.
Dec 13 07:13:33 compute-0 podman[75663]: 2025-12-13 07:13:33.05463475 +0000 UTC m=+0.035460290 container remove 38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242 (image=quay.io/ceph/ceph:v20, name=trusting_hertz, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:33 compute-0 systemd[1]: libpod-conmon-38c37013b84f3f7d75907f84b7fdf360591ea4366174164ffee834ca82fd7242.scope: Deactivated successfully.
Dec 13 07:13:33 compute-0 podman[75677]: 2025-12-13 07:13:33.102310314 +0000 UTC m=+0.028954809 container create 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:33 compute-0 systemd[1]: Started libpod-conmon-7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2.scope.
Dec 13 07:13:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a25372e05e47285013215b462a66482a8dc0ff2105418fa078ec64a5b236423/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a25372e05e47285013215b462a66482a8dc0ff2105418fa078ec64a5b236423/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a25372e05e47285013215b462a66482a8dc0ff2105418fa078ec64a5b236423/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:33 compute-0 podman[75677]: 2025-12-13 07:13:33.160067521 +0000 UTC m=+0.086712005 container init 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:33 compute-0 podman[75677]: 2025-12-13 07:13:33.164490175 +0000 UTC m=+0.091134661 container start 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 07:13:33 compute-0 podman[75677]: 2025-12-13 07:13:33.165512919 +0000 UTC m=+0.092157403 container attach 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:13:33 compute-0 podman[75677]: 2025-12-13 07:13:33.091599487 +0000 UTC m=+0.018243972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:33 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'crash'
Dec 13 07:13:33 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'dashboard'
Dec 13 07:13:33 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3001210130' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 13 07:13:33 compute-0 ceph-mon[74928]: mgrmap e5: compute-0.qsherl(active, since 3s)
Dec 13 07:13:33 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4121678077' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 07:13:34 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'devicehealth'
Dec 13 07:13:34 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 07:13:34 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 07:13:34 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 07:13:34 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]:   from numpy import show_config as show_numpy_config
Dec 13 07:13:34 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'influx'
Dec 13 07:13:34 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'insights'
Dec 13 07:13:34 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'iostat'
Dec 13 07:13:34 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'k8sevents'
Dec 13 07:13:34 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'localpool'
Dec 13 07:13:34 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 07:13:35 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'mirroring'
Dec 13 07:13:35 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'nfs'
Dec 13 07:13:35 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'orchestrator'
Dec 13 07:13:35 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 07:13:35 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'osd_support'
Dec 13 07:13:35 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 07:13:35 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'progress'
Dec 13 07:13:35 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'prometheus'
Dec 13 07:13:36 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'rbd_support'
Dec 13 07:13:36 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'rgw'
Dec 13 07:13:36 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'rook'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'selftest'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'smb'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'snap_schedule'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'stats'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'status'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'telegraf'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'telemetry'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 07:13:37 compute-0 ceph-mgr[75200]: mgr[py] Loading python module 'volumes'
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qsherl restarted
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qsherl
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: ms_deliver_dispatch: unhandled message 0x55cf1656c000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: mgr handle_mgr_map Activating!
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: mgr handle_mgr_map I am now activating
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.qsherl(active, starting, since 0.00581274s)
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} v 0)
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e1 all = 1
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: balancer
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Starting
Dec 13 07:13:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Manager daemon compute-0.qsherl is now available
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:13:38
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:13:38 compute-0 ceph-mgr[75200]: [balancer INFO root] No pools available
Dec 13 07:13:38 compute-0 ceph-mon[74928]: Active manager daemon compute-0.qsherl restarted
Dec 13 07:13:38 compute-0 ceph-mon[74928]: Activating manager daemon compute-0.qsherl
Dec 13 07:13:38 compute-0 ceph-mon[74928]: osdmap e2: 0 total, 0 up, 0 in
Dec 13 07:13:38 compute-0 ceph-mon[74928]: mgrmap e6: compute-0.qsherl(active, starting, since 0.00581274s)
Dec 13 07:13:38 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr metadata", "who": "compute-0.qsherl", "id": "compute-0.qsherl"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 07:13:38 compute-0 ceph-mon[74928]: Manager daemon compute-0.qsherl is now available
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: cephadm
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: crash
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: devicehealth
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [devicehealth INFO root] Starting
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: iostat
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: nfs
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: orchestrator
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: pg_autoscaler
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: progress
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [progress INFO root] Loading...
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [progress INFO root] No stored events to load
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [progress INFO root] Loaded [] historic events
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [progress INFO root] Loaded OSDMap, ready.
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] recovery thread starting
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] starting setup
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: rbd_support
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: status
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: telemetry
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} : dispatch
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] PerfHandler: starting
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TaskHandler: starting
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} : dispatch
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] setup complete
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: mgr load Constructed class from module: volumes
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.qsherl(active, since 1.0081s)
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 13 07:13:39 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 13 07:13:39 compute-0 wizardly_roentgen[75699]: {
Dec 13 07:13:39 compute-0 wizardly_roentgen[75699]:     "mgrmap_epoch": 7,
Dec 13 07:13:39 compute-0 wizardly_roentgen[75699]:     "initialized": true
Dec 13 07:13:39 compute-0 wizardly_roentgen[75699]: }
Dec 13 07:13:39 compute-0 systemd[1]: libpod-7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2.scope: Deactivated successfully.
Dec 13 07:13:39 compute-0 podman[75677]: 2025-12-13 07:13:39.206526831 +0000 UTC m=+6.133171315 container died 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 07:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a25372e05e47285013215b462a66482a8dc0ff2105418fa078ec64a5b236423-merged.mount: Deactivated successfully.
Dec 13 07:13:39 compute-0 podman[75677]: 2025-12-13 07:13:39.231300537 +0000 UTC m=+6.157945022 container remove 7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2 (image=quay.io/ceph/ceph:v20, name=wizardly_roentgen, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:13:39 compute-0 systemd[1]: libpod-conmon-7b7cad99209a3b1fee269c39cb074dfb803877027f9276257e969e59db87c9a2.scope: Deactivated successfully.
Dec 13 07:13:39 compute-0 podman[75842]: 2025-12-13 07:13:39.276251577 +0000 UTC m=+0.030227331 container create 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:13:39 compute-0 systemd[1]: Started libpod-conmon-72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc.scope.
Dec 13 07:13:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bf8a4748430898131f0a24444b9f30d20ba8ff9a86417fdf77b477ac3f18b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bf8a4748430898131f0a24444b9f30d20ba8ff9a86417fdf77b477ac3f18b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bf8a4748430898131f0a24444b9f30d20ba8ff9a86417fdf77b477ac3f18b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:39 compute-0 podman[75842]: 2025-12-13 07:13:39.323663936 +0000 UTC m=+0.077639709 container init 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:13:39 compute-0 podman[75842]: 2025-12-13 07:13:39.32790542 +0000 UTC m=+0.081881173 container start 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 07:13:39 compute-0 podman[75842]: 2025-12-13 07:13:39.329000028 +0000 UTC m=+0.082975801 container attach 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:13:39 compute-0 podman[75842]: 2025-12-13 07:13:39.264822761 +0000 UTC m=+0.018798534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec 13 07:13:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/699535926' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:40 compute-0 ceph-mon[74928]: Found migration_current of "None". Setting to last migration.
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/mirror_snapshot_schedule"} : dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qsherl/trash_purge_schedule"} : dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: mgrmap e7: compute-0.qsherl(active, since 1.0081s)
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/699535926' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/699535926' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 13 07:13:40 compute-0 friendly_moore[75857]: module 'orchestrator' is already enabled (always-on)
Dec 13 07:13:40 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.qsherl(active, since 2s)
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:40 compute-0 systemd[1]: libpod-72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc.scope: Deactivated successfully.
Dec 13 07:13:40 compute-0 podman[75842]: 2025-12-13 07:13:40.199701185 +0000 UTC m=+0.953676938 container died 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-98bf8a4748430898131f0a24444b9f30d20ba8ff9a86417fdf77b477ac3f18b7-merged.mount: Deactivated successfully.
Dec 13 07:13:40 compute-0 podman[75842]: 2025-12-13 07:13:40.229895432 +0000 UTC m=+0.983871185 container remove 72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc (image=quay.io/ceph/ceph:v20, name=friendly_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:40 compute-0 systemd[1]: libpod-conmon-72da5cc3dd0c213ae66d8337e0c5a03127dd8ac40d60f17030a52076271c0bcc.scope: Deactivated successfully.
Dec 13 07:13:40 compute-0 podman[75894]: 2025-12-13 07:13:40.288176855 +0000 UTC m=+0.044136271 container create 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:40 compute-0 systemd[1]: Started libpod-conmon-7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e.scope.
Dec 13 07:13:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea907f8c6f93456d77ac6861004f854dae845d4252a877191d4c9bbf8bcd7b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea907f8c6f93456d77ac6861004f854dae845d4252a877191d4c9bbf8bcd7b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea907f8c6f93456d77ac6861004f854dae845d4252a877191d4c9bbf8bcd7b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:40 compute-0 podman[75894]: 2025-12-13 07:13:40.344468617 +0000 UTC m=+0.100428043 container init 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Bus STARTING
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Bus STARTING
Dec 13 07:13:40 compute-0 podman[75894]: 2025-12-13 07:13:40.348179695 +0000 UTC m=+0.104139110 container start 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:13:40 compute-0 podman[75894]: 2025-12-13 07:13:40.349128278 +0000 UTC m=+0.105087693 container attach 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:13:40 compute-0 podman[75894]: 2025-12-13 07:13:40.261666914 +0000 UTC m=+0.017626340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Serving on https://192.168.122.100:7150
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Serving on https://192.168.122.100:7150
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Client ('192.168.122.100', 58058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Client ('192.168.122.100', 58058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Serving on http://192.168.122.100:8765
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Serving on http://192.168.122.100:8765
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:13:40] ENGINE Bus STARTED
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:13:40] ENGINE Bus STARTED
Dec 13 07:13:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 07:13:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:40 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 13 07:13:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 07:13:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:40 compute-0 systemd[1]: libpod-7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e.scope: Deactivated successfully.
Dec 13 07:13:40 compute-0 podman[75956]: 2025-12-13 07:13:40.707369175 +0000 UTC m=+0.016763619 container died 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 07:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ea907f8c6f93456d77ac6861004f854dae845d4252a877191d4c9bbf8bcd7b7-merged.mount: Deactivated successfully.
Dec 13 07:13:40 compute-0 podman[75956]: 2025-12-13 07:13:40.722023997 +0000 UTC m=+0.031418441 container remove 7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e (image=quay.io/ceph/ceph:v20, name=beautiful_wu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:13:40 compute-0 systemd[1]: libpod-conmon-7d1d7ffdecd4c1414bbf71569d441acbbcdf592b24e66d20e92f916e52638e7e.scope: Deactivated successfully.
Dec 13 07:13:40 compute-0 podman[75968]: 2025-12-13 07:13:40.762234625 +0000 UTC m=+0.024221979 container create 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:40 compute-0 systemd[1]: Started libpod-conmon-67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba.scope.
Dec 13 07:13:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c48038ca479d4e5298459e28dd90f45324dcbd981f1c9edfe0119e8a35a30e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c48038ca479d4e5298459e28dd90f45324dcbd981f1c9edfe0119e8a35a30e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c48038ca479d4e5298459e28dd90f45324dcbd981f1c9edfe0119e8a35a30e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:40 compute-0 podman[75968]: 2025-12-13 07:13:40.808155189 +0000 UTC m=+0.070142564 container init 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 07:13:40 compute-0 podman[75968]: 2025-12-13 07:13:40.811887927 +0000 UTC m=+0.073875281 container start 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:40 compute-0 podman[75968]: 2025-12-13 07:13:40.812943652 +0000 UTC m=+0.074931026 container attach 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 07:13:40 compute-0 podman[75968]: 2025-12-13 07:13:40.752819013 +0000 UTC m=+0.014806387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 13 07:13:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: [cephadm INFO root] Set ssh ssh_user
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 13 07:13:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 13 07:13:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: [cephadm INFO root] Set ssh ssh_config
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 13 07:13:41 compute-0 pensive_banzai[75981]: ssh user set to ceph-admin. sudo will be used
Dec 13 07:13:41 compute-0 systemd[1]: libpod-67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba.scope: Deactivated successfully.
Dec 13 07:13:41 compute-0 podman[75968]: 2025-12-13 07:13:41.130790305 +0000 UTC m=+0.392777659 container died 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 07:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-55c48038ca479d4e5298459e28dd90f45324dcbd981f1c9edfe0119e8a35a30e-merged.mount: Deactivated successfully.
Dec 13 07:13:41 compute-0 podman[75968]: 2025-12-13 07:13:41.146727608 +0000 UTC m=+0.408714962 container remove 67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba (image=quay.io/ceph/ceph:v20, name=pensive_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:13:41 compute-0 systemd[1]: libpod-conmon-67a57ff5cdb6c7fc69f1984da93db2f67beef781c2f3ced97799fab594c61eba.scope: Deactivated successfully.
Dec 13 07:13:41 compute-0 podman[76016]: 2025-12-13 07:13:41.184366281 +0000 UTC m=+0.025123234 container create 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 07:13:41 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/699535926' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 13 07:13:41 compute-0 ceph-mon[74928]: mgrmap e8: compute-0.qsherl(active, since 2s)
Dec 13 07:13:41 compute-0 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Bus STARTING
Dec 13 07:13:41 compute-0 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Serving on https://192.168.122.100:7150
Dec 13 07:13:41 compute-0 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Client ('192.168.122.100', 58058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 07:13:41 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:41 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:41 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:41 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:41 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:41 compute-0 systemd[1]: Started libpod-conmon-3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972.scope.
Dec 13 07:13:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 podman[76016]: 2025-12-13 07:13:41.224063956 +0000 UTC m=+0.064820929 container init 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:13:41 compute-0 podman[76016]: 2025-12-13 07:13:41.230485378 +0000 UTC m=+0.071242332 container start 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:41 compute-0 podman[76016]: 2025-12-13 07:13:41.231868619 +0000 UTC m=+0.072625572 container attach 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:41 compute-0 podman[76016]: 2025-12-13 07:13:41.174346143 +0000 UTC m=+0.015103106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 13 07:13:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: [cephadm INFO root] Set ssh private key
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 13 07:13:41 compute-0 systemd[1]: libpod-3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972.scope: Deactivated successfully.
Dec 13 07:13:41 compute-0 podman[76016]: 2025-12-13 07:13:41.550704612 +0000 UTC m=+0.391461565 container died 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-41dba9670584cb2397acde0de69c4c3a8386aa90b8880c8018af3968d0935177-merged.mount: Deactivated successfully.
Dec 13 07:13:41 compute-0 podman[76016]: 2025-12-13 07:13:41.569465284 +0000 UTC m=+0.410222237 container remove 3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972 (image=quay.io/ceph/ceph:v20, name=great_banzai, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 07:13:41 compute-0 systemd[1]: libpod-conmon-3f920aada7861a4b3f69940a11b9bc1ecf13f1127d1d382d18db57a4ebada972.scope: Deactivated successfully.
Dec 13 07:13:41 compute-0 podman[76065]: 2025-12-13 07:13:41.607467049 +0000 UTC m=+0.025229875 container create 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:41 compute-0 systemd[1]: Started libpod-conmon-83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79.scope.
Dec 13 07:13:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:41 compute-0 podman[76065]: 2025-12-13 07:13:41.64854064 +0000 UTC m=+0.066303466 container init 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 07:13:41 compute-0 podman[76065]: 2025-12-13 07:13:41.653177538 +0000 UTC m=+0.070940354 container start 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:41 compute-0 podman[76065]: 2025-12-13 07:13:41.654337619 +0000 UTC m=+0.072100445 container attach 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:13:41 compute-0 podman[76065]: 2025-12-13 07:13:41.596612763 +0000 UTC m=+0.014375599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 13 07:13:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 13 07:13:41 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 13 07:13:41 compute-0 systemd[1]: libpod-83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79.scope: Deactivated successfully.
Dec 13 07:13:41 compute-0 conmon[76080]: conmon 83ff7c1acdea74892cee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79.scope/container/memory.events
Dec 13 07:13:41 compute-0 podman[76106]: 2025-12-13 07:13:41.998611457 +0000 UTC m=+0.015162017 container died 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e80302afd7e483cfbeb4a67c5300e614b7907d1f5c3726db2236bd9dc504c1e6-merged.mount: Deactivated successfully.
Dec 13 07:13:42 compute-0 podman[76106]: 2025-12-13 07:13:42.015950155 +0000 UTC m=+0.032500695 container remove 83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79 (image=quay.io/ceph/ceph:v20, name=amazing_engelbart, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:13:42 compute-0 systemd[1]: libpod-conmon-83ff7c1acdea74892cee76a7823c21b91697ef4ce1a1befd2f5f0f65b5c6cd79.scope: Deactivated successfully.
Dec 13 07:13:42 compute-0 podman[76118]: 2025-12-13 07:13:42.058797611 +0000 UTC m=+0.025612003 container create 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:42 compute-0 systemd[1]: Started libpod-conmon-6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0.scope.
Dec 13 07:13:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c852501fe91991baa1a0c77a870f86632bc865707cbe965468914e4c81f11d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c852501fe91991baa1a0c77a870f86632bc865707cbe965468914e4c81f11d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c852501fe91991baa1a0c77a870f86632bc865707cbe965468914e4c81f11d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:42 compute-0 podman[76118]: 2025-12-13 07:13:42.099166948 +0000 UTC m=+0.065981350 container init 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:13:42 compute-0 podman[76118]: 2025-12-13 07:13:42.103400427 +0000 UTC m=+0.070214818 container start 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:42 compute-0 podman[76118]: 2025-12-13 07:13:42.104445622 +0000 UTC m=+0.071260014 container attach 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 07:13:42 compute-0 podman[76118]: 2025-12-13 07:13:42.048511262 +0000 UTC m=+0.015325674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:42 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:42 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:42 compute-0 zen_chebyshev[76131]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlgKQw8WAbDTWUzeHJ5nqECv2AJbxtodMCfaa9/+87rAkbcBDDM+m2+wp4TmMHAGS1DSLXP1DQIuhhLRxztl/Gysfxve+g7QaHj+gk7fnWVCy+QBJ911iNvmhQP30hFsuaGnE33QpjQgzPKNppCtTvHw4/C26IUbp4X+TAfhW8CmYaeWBJI0fm9uOZnGmMO0YycxGwtjKDj6jqy2Vmab1EtnFv6N/SM1eHaViS9EcwvOLyOF5ogBiL2RRMJHZ89GA4I3c2T2jaujU2X/TKH7lkhQ60CISQyPgyeZyYmXP4IX1iEySEWn7dkw5Pd7w8ZgrNuRejDe6BSDQGJDncCTq2Mue7LQ61JvIWUMD8YyqeKRnuQbSNeFVLTvmz9wO/1s532/3VDT75RUvWV9+OJd3/jbC5DtwRHYXrSH5yqNBZtB9q4q9TCIerhf5M6m341RyepNF5sf32n0ocRhNhI48E0EK5g/XvihxelmvRp/Wtldy7Ne87BaDFwIkdKRRzuRM= zuul@controller
Dec 13 07:13:42 compute-0 systemd[1]: libpod-6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0.scope: Deactivated successfully.
Dec 13 07:13:42 compute-0 conmon[76131]: conmon 6e4e34c54eea083c9d43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0.scope/container/memory.events
Dec 13 07:13:42 compute-0 podman[76118]: 2025-12-13 07:13:42.419938349 +0000 UTC m=+0.386752741 container died 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c852501fe91991baa1a0c77a870f86632bc865707cbe965468914e4c81f11d2-merged.mount: Deactivated successfully.
Dec 13 07:13:42 compute-0 podman[76118]: 2025-12-13 07:13:42.438339775 +0000 UTC m=+0.405154167 container remove 6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0 (image=quay.io/ceph/ceph:v20, name=zen_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec 13 07:13:42 compute-0 systemd[1]: libpod-conmon-6e4e34c54eea083c9d4375b77b584551780e332ce6a5b0c50d6e5e990e6f8ea0.scope: Deactivated successfully.
Dec 13 07:13:42 compute-0 podman[76166]: 2025-12-13 07:13:42.478662565 +0000 UTC m=+0.025843308 container create fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 07:13:42 compute-0 systemd[1]: Started libpod-conmon-fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5.scope.
Dec 13 07:13:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b6c4f373782b5ba3c83a613429540f38cb16e7b054bbee5e74403dc151e834/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b6c4f373782b5ba3c83a613429540f38cb16e7b054bbee5e74403dc151e834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b6c4f373782b5ba3c83a613429540f38cb16e7b054bbee5e74403dc151e834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:42 compute-0 podman[76166]: 2025-12-13 07:13:42.526357895 +0000 UTC m=+0.073538628 container init fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 07:13:42 compute-0 podman[76166]: 2025-12-13 07:13:42.5306846 +0000 UTC m=+0.077865333 container start fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:13:42 compute-0 podman[76166]: 2025-12-13 07:13:42.531754562 +0000 UTC m=+0.078935294 container attach fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:42 compute-0 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Serving on http://192.168.122.100:8765
Dec 13 07:13:42 compute-0 ceph-mon[74928]: [13/Dec/2025:07:13:40] ENGINE Bus STARTED
Dec 13 07:13:42 compute-0 ceph-mon[74928]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:42 compute-0 ceph-mon[74928]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:42 compute-0 ceph-mon[74928]: Set ssh ssh_user
Dec 13 07:13:42 compute-0 ceph-mon[74928]: Set ssh ssh_config
Dec 13 07:13:42 compute-0 ceph-mon[74928]: ssh user set to ceph-admin. sudo will be used
Dec 13 07:13:42 compute-0 ceph-mon[74928]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:42 compute-0 ceph-mon[74928]: Set ssh ssh_identity_key
Dec 13 07:13:42 compute-0 ceph-mon[74928]: Set ssh private key
Dec 13 07:13:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:42 compute-0 podman[76166]: 2025-12-13 07:13:42.468385564 +0000 UTC m=+0.015566297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019901112 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:13:42 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:43 compute-0 sshd-session[76206]: Accepted publickey for ceph-admin from 192.168.122.100 port 41978 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:43 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 13 07:13:43 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 13 07:13:43 compute-0 systemd-logind[745]: New session 20 of user ceph-admin.
Dec 13 07:13:43 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 13 07:13:43 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:43 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 13 07:13:43 compute-0 systemd[76210]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:43 compute-0 systemd[76210]: Queued start job for default target Main User Target.
Dec 13 07:13:43 compute-0 systemd[76210]: Created slice User Application Slice.
Dec 13 07:13:43 compute-0 systemd[76210]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 07:13:43 compute-0 systemd[76210]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 07:13:43 compute-0 systemd[76210]: Reached target Paths.
Dec 13 07:13:43 compute-0 systemd[76210]: Reached target Timers.
Dec 13 07:13:43 compute-0 systemd[76210]: Starting D-Bus User Message Bus Socket...
Dec 13 07:13:43 compute-0 systemd[76210]: Starting Create User's Volatile Files and Directories...
Dec 13 07:13:43 compute-0 systemd[76210]: Finished Create User's Volatile Files and Directories.
Dec 13 07:13:43 compute-0 systemd[76210]: Listening on D-Bus User Message Bus Socket.
Dec 13 07:13:43 compute-0 systemd[76210]: Reached target Sockets.
Dec 13 07:13:43 compute-0 systemd[76210]: Reached target Basic System.
Dec 13 07:13:43 compute-0 systemd[76210]: Reached target Main User Target.
Dec 13 07:13:43 compute-0 systemd[76210]: Startup finished in 87ms.
Dec 13 07:13:43 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 13 07:13:43 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Dec 13 07:13:43 compute-0 sshd-session[76206]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:43 compute-0 sshd-session[76223]: Accepted publickey for ceph-admin from 192.168.122.100 port 41994 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:43 compute-0 systemd-logind[745]: New session 22 of user ceph-admin.
Dec 13 07:13:43 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Dec 13 07:13:43 compute-0 sshd-session[76223]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:43 compute-0 sudo[76229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:43 compute-0 sudo[76229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:43 compute-0 sudo[76229]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:43 compute-0 sshd-session[76254]: Accepted publickey for ceph-admin from 192.168.122.100 port 42002 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:43 compute-0 systemd-logind[745]: New session 23 of user ceph-admin.
Dec 13 07:13:43 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 13 07:13:43 compute-0 sshd-session[76254]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:43 compute-0 ceph-mon[74928]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:43 compute-0 ceph-mon[74928]: Set ssh ssh_identity_pub
Dec 13 07:13:43 compute-0 ceph-mon[74928]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:43 compute-0 sudo[76258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 13 07:13:43 compute-0 sudo[76258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:43 compute-0 sudo[76258]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:43 compute-0 sshd-session[76283]: Accepted publickey for ceph-admin from 192.168.122.100 port 42014 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:43 compute-0 systemd-logind[745]: New session 24 of user ceph-admin.
Dec 13 07:13:43 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 13 07:13:43 compute-0 sshd-session[76283]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:43 compute-0 sudo[76287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 13 07:13:43 compute-0 sudo[76287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:43 compute-0 sudo[76287]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:43 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 13 07:13:43 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 13 07:13:43 compute-0 sshd-session[76312]: Accepted publickey for ceph-admin from 192.168.122.100 port 42026 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:43 compute-0 systemd-logind[745]: New session 25 of user ceph-admin.
Dec 13 07:13:44 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 13 07:13:44 compute-0 sshd-session[76312]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:44 compute-0 sudo[76316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:44 compute-0 sudo[76316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:44 compute-0 sudo[76316]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:44 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:44 compute-0 sshd-session[76341]: Accepted publickey for ceph-admin from 192.168.122.100 port 42038 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:44 compute-0 systemd-logind[745]: New session 26 of user ceph-admin.
Dec 13 07:13:44 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 13 07:13:44 compute-0 sshd-session[76341]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:44 compute-0 sudo[76345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:44 compute-0 sudo[76345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:44 compute-0 sudo[76345]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:44 compute-0 ceph-mon[74928]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:44 compute-0 sshd-session[76370]: Accepted publickey for ceph-admin from 192.168.122.100 port 42042 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:44 compute-0 systemd-logind[745]: New session 27 of user ceph-admin.
Dec 13 07:13:44 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 13 07:13:44 compute-0 sshd-session[76370]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:44 compute-0 sudo[76374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 13 07:13:44 compute-0 sudo[76374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:44 compute-0 sudo[76374]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:44 compute-0 sshd-session[76399]: Accepted publickey for ceph-admin from 192.168.122.100 port 42050 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:44 compute-0 systemd-logind[745]: New session 28 of user ceph-admin.
Dec 13 07:13:44 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 13 07:13:44 compute-0 sshd-session[76399]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:44 compute-0 sudo[76403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:44 compute-0 sudo[76403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:44 compute-0 sudo[76403]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:45 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:45 compute-0 sshd-session[76428]: Accepted publickey for ceph-admin from 192.168.122.100 port 42062 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:45 compute-0 systemd-logind[745]: New session 29 of user ceph-admin.
Dec 13 07:13:45 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 13 07:13:45 compute-0 sshd-session[76428]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:45 compute-0 sudo[76432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 13 07:13:45 compute-0 sudo[76432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:45 compute-0 sudo[76432]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:45 compute-0 sshd-session[76457]: Accepted publickey for ceph-admin from 192.168.122.100 port 42066 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:45 compute-0 systemd-logind[745]: New session 30 of user ceph-admin.
Dec 13 07:13:45 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 13 07:13:45 compute-0 sshd-session[76457]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:45 compute-0 ceph-mon[74928]: Deploying cephadm binary to compute-0
Dec 13 07:13:46 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:46 compute-0 sshd-session[76484]: Accepted publickey for ceph-admin from 192.168.122.100 port 42082 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:46 compute-0 systemd-logind[745]: New session 31 of user ceph-admin.
Dec 13 07:13:46 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 13 07:13:46 compute-0 sshd-session[76484]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:46 compute-0 sudo[76488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 13 07:13:46 compute-0 sudo[76488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:46 compute-0 sudo[76488]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:46 compute-0 sshd-session[76513]: Accepted publickey for ceph-admin from 192.168.122.100 port 42092 ssh2: RSA SHA256:J+iY/Xk1As4KxUf+MyDLOArqGcssspUqj4qMMnYVAIw
Dec 13 07:13:46 compute-0 systemd-logind[745]: New session 32 of user ceph-admin.
Dec 13 07:13:46 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 13 07:13:46 compute-0 sshd-session[76513]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 07:13:46 compute-0 sudo[76517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 13 07:13:46 compute-0 sudo[76517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:46 compute-0 sudo[76517]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 07:13:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:46 compute-0 ceph-mgr[75200]: [cephadm INFO root] Added host compute-0
Dec 13 07:13:46 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 13 07:13:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 07:13:47 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:47 compute-0 stupefied_sammet[76180]: Added host 'compute-0' with addr '192.168.122.100'
Dec 13 07:13:47 compute-0 systemd[1]: libpod-fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5.scope: Deactivated successfully.
Dec 13 07:13:47 compute-0 podman[76166]: 2025-12-13 07:13:47.018214057 +0000 UTC m=+4.565394790 container died fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8b6c4f373782b5ba3c83a613429540f38cb16e7b054bbee5e74403dc151e834-merged.mount: Deactivated successfully.
Dec 13 07:13:47 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:47 compute-0 sudo[76559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:47 compute-0 podman[76166]: 2025-12-13 07:13:47.044633878 +0000 UTC m=+4.591814611 container remove fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5 (image=quay.io/ceph/ceph:v20, name=stupefied_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 07:13:47 compute-0 sudo[76559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:47 compute-0 sudo[76559]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:47 compute-0 systemd[1]: libpod-conmon-fe96fa435b331d9ee9e156957224e3931b814f9f393346102e47d8be70a8b1d5.scope: Deactivated successfully.
Dec 13 07:13:47 compute-0 podman[76593]: 2025-12-13 07:13:47.089481022 +0000 UTC m=+0.027002968 container create 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 07:13:47 compute-0 sudo[76594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Dec 13 07:13:47 compute-0 sudo[76594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:47 compute-0 systemd[1]: Started libpod-conmon-292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7.scope.
Dec 13 07:13:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2d0a777d4b251de4f739e4e6828a4b2d568d928d1333bb959eae095026ce44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2d0a777d4b251de4f739e4e6828a4b2d568d928d1333bb959eae095026ce44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2d0a777d4b251de4f739e4e6828a4b2d568d928d1333bb959eae095026ce44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:47 compute-0 podman[76593]: 2025-12-13 07:13:47.161588681 +0000 UTC m=+0.099110626 container init 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:47 compute-0 podman[76593]: 2025-12-13 07:13:47.166767707 +0000 UTC m=+0.104289662 container start 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:13:47 compute-0 podman[76593]: 2025-12-13 07:13:47.16805088 +0000 UTC m=+0.105572845 container attach 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:13:47 compute-0 podman[76593]: 2025-12-13 07:13:47.078186388 +0000 UTC m=+0.015708354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:47 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:47 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 13 07:13:47 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 13 07:13:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 07:13:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:47 compute-0 vigorous_grothendieck[76635]: Scheduled mon update...
Dec 13 07:13:47 compute-0 systemd[1]: libpod-292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7.scope: Deactivated successfully.
Dec 13 07:13:47 compute-0 podman[76593]: 2025-12-13 07:13:47.507745529 +0000 UTC m=+0.445267474 container died 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd2d0a777d4b251de4f739e4e6828a4b2d568d928d1333bb959eae095026ce44-merged.mount: Deactivated successfully.
Dec 13 07:13:47 compute-0 podman[76593]: 2025-12-13 07:13:47.526361187 +0000 UTC m=+0.463883132 container remove 292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7 (image=quay.io/ceph/ceph:v20, name=vigorous_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 07:13:47 compute-0 systemd[1]: libpod-conmon-292cec4b12eb3f98922d3a4b3d47b8da94f85232a1ebbe0940e5d565ee3cb7d7.scope: Deactivated successfully.
Dec 13 07:13:47 compute-0 podman[76692]: 2025-12-13 07:13:47.564464595 +0000 UTC m=+0.024819332 container create 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:47 compute-0 systemd[1]: Started libpod-conmon-2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a.scope.
Dec 13 07:13:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1df3459a4a7d0c55b3fb2b0b9a141e34c1f0f38a6fcbe89bd51a3525bf82bae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1df3459a4a7d0c55b3fb2b0b9a141e34c1f0f38a6fcbe89bd51a3525bf82bae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1df3459a4a7d0c55b3fb2b0b9a141e34c1f0f38a6fcbe89bd51a3525bf82bae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:47 compute-0 podman[76692]: 2025-12-13 07:13:47.614681125 +0000 UTC m=+0.075035873 container init 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 07:13:47 compute-0 podman[76692]: 2025-12-13 07:13:47.619071919 +0000 UTC m=+0.079426658 container start 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 07:13:47 compute-0 podman[76692]: 2025-12-13 07:13:47.620078482 +0000 UTC m=+0.080433240 container attach 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:13:47 compute-0 podman[76692]: 2025-12-13 07:13:47.554788022 +0000 UTC m=+0.015142770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052609 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:13:47 compute-0 podman[76670]: 2025-12-13 07:13:47.860038733 +0000 UTC m=+0.567290535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:47 compute-0 podman[76740]: 2025-12-13 07:13:47.927109872 +0000 UTC m=+0.024311598 container create 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:13:47 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:47 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 13 07:13:47 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 13 07:13:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 07:13:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:47 compute-0 amazing_mirzakhani[76706]: Scheduled mgr update...
Dec 13 07:13:47 compute-0 systemd[1]: Started libpod-conmon-919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b.scope.
Dec 13 07:13:47 compute-0 systemd[1]: libpod-2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a.scope: Deactivated successfully.
Dec 13 07:13:47 compute-0 conmon[76706]: conmon 2667b3d5af6d0cec7731 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a.scope/container/memory.events
Dec 13 07:13:47 compute-0 podman[76692]: 2025-12-13 07:13:47.958933963 +0000 UTC m=+0.419288701 container died 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:13:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:47 compute-0 podman[76740]: 2025-12-13 07:13:47.971119172 +0000 UTC m=+0.068320907 container init 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:13:47 compute-0 podman[76740]: 2025-12-13 07:13:47.97510181 +0000 UTC m=+0.072303546 container start 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 07:13:47 compute-0 podman[76740]: 2025-12-13 07:13:47.976132918 +0000 UTC m=+0.073334654 container attach 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:47 compute-0 podman[76692]: 2025-12-13 07:13:47.977945496 +0000 UTC m=+0.438300235 container remove 2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a (image=quay.io/ceph/ceph:v20, name=amazing_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:47 compute-0 systemd[1]: libpod-conmon-2667b3d5af6d0cec7731e8836bc191e0a46c70837632db045cf4e447e5d4b56a.scope: Deactivated successfully.
Dec 13 07:13:47 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:47 compute-0 ceph-mon[74928]: Added host compute-0
Dec 13 07:13:47 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:13:47 compute-0 ceph-mon[74928]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:47 compute-0 ceph-mon[74928]: Saving service mon spec with placement count:5
Dec 13 07:13:47 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:48 compute-0 podman[76740]: 2025-12-13 07:13:47.917586827 +0000 UTC m=+0.014788583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:48 compute-0 podman[76770]: 2025-12-13 07:13:48.025514559 +0000 UTC m=+0.026609851 container create 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 07:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1df3459a4a7d0c55b3fb2b0b9a141e34c1f0f38a6fcbe89bd51a3525bf82bae-merged.mount: Deactivated successfully.
Dec 13 07:13:48 compute-0 mystifying_franklin[76755]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 13 07:13:48 compute-0 systemd[1]: Started libpod-conmon-51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278.scope.
Dec 13 07:13:48 compute-0 systemd[1]: libpod-919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b.scope: Deactivated successfully.
Dec 13 07:13:48 compute-0 conmon[76755]: conmon 919c9273aad726c7a392 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b.scope/container/memory.events
Dec 13 07:13:48 compute-0 podman[76740]: 2025-12-13 07:13:48.057854219 +0000 UTC m=+0.155055955 container died 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 07:13:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51760f9f9e3f87c2cc44a94c2d219bcfde17b3fd0f6b4721e8c605c13627b3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51760f9f9e3f87c2cc44a94c2d219bcfde17b3fd0f6b4721e8c605c13627b3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51760f9f9e3f87c2cc44a94c2d219bcfde17b3fd0f6b4721e8c605c13627b3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5e61bf81b707fcbdf27747d5f17afa77f55dfc542c3397f9571c94baa9d8194-merged.mount: Deactivated successfully.
Dec 13 07:13:48 compute-0 podman[76770]: 2025-12-13 07:13:48.079591147 +0000 UTC m=+0.080686428 container init 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:48 compute-0 podman[76740]: 2025-12-13 07:13:48.086390921 +0000 UTC m=+0.183592657 container remove 919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b (image=quay.io/ceph/ceph:v20, name=mystifying_franklin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 07:13:48 compute-0 podman[76770]: 2025-12-13 07:13:48.08834771 +0000 UTC m=+0.089442981 container start 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 07:13:48 compute-0 podman[76770]: 2025-12-13 07:13:48.095511007 +0000 UTC m=+0.096606298 container attach 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:48 compute-0 systemd[1]: libpod-conmon-919c9273aad726c7a39268fbd4acfd1114c71d7437004a18a6fc83bdb6b6866b.scope: Deactivated successfully.
Dec 13 07:13:48 compute-0 podman[76770]: 2025-12-13 07:13:48.014235233 +0000 UTC m=+0.015330534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:48 compute-0 sudo[76594]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 13 07:13:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:48 compute-0 sudo[76798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:48 compute-0 sudo[76798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:48 compute-0 sudo[76798]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:48 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:48 compute-0 sudo[76825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 07:13:48 compute-0 sudo[76825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:48 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:48 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service crash spec with placement *
Dec 13 07:13:48 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 13 07:13:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 07:13:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:48 compute-0 vigorous_feistel[76783]: Scheduled crash update...
Dec 13 07:13:48 compute-0 systemd[1]: libpod-51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278.scope: Deactivated successfully.
Dec 13 07:13:48 compute-0 sudo[76825]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:48 compute-0 podman[76887]: 2025-12-13 07:13:48.468030238 +0000 UTC m=+0.018597546 container died 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c51760f9f9e3f87c2cc44a94c2d219bcfde17b3fd0f6b4721e8c605c13627b3b-merged.mount: Deactivated successfully.
Dec 13 07:13:48 compute-0 podman[76887]: 2025-12-13 07:13:48.485681785 +0000 UTC m=+0.036249092 container remove 51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278 (image=quay.io/ceph/ceph:v20, name=vigorous_feistel, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:48 compute-0 systemd[1]: libpod-conmon-51c1ed5dc16c80f606ddcf3689a21e980d916efa4cffec951151e3cfc7b65278.scope: Deactivated successfully.
Dec 13 07:13:48 compute-0 sudo[76893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:48 compute-0 sudo[76893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:48 compute-0 sudo[76893]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:48 compute-0 podman[76923]: 2025-12-13 07:13:48.53302902 +0000 UTC m=+0.028336256 container create a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:48 compute-0 sudo[76925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:13:48 compute-0 sudo[76925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:48 compute-0 systemd[1]: Started libpod-conmon-a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b.scope.
Dec 13 07:13:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651281a5a237fc10c87f7d45bd56b1fc13ab68c1519e179ff9a75189eec31324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651281a5a237fc10c87f7d45bd56b1fc13ab68c1519e179ff9a75189eec31324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651281a5a237fc10c87f7d45bd56b1fc13ab68c1519e179ff9a75189eec31324/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:48 compute-0 podman[76923]: 2025-12-13 07:13:48.590135665 +0000 UTC m=+0.085442920 container init a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:48 compute-0 podman[76923]: 2025-12-13 07:13:48.595003838 +0000 UTC m=+0.090311083 container start a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:48 compute-0 podman[76923]: 2025-12-13 07:13:48.597464854 +0000 UTC m=+0.092772109 container attach a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 07:13:48 compute-0 podman[76923]: 2025-12-13 07:13:48.520250977 +0000 UTC m=+0.015558242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:48 compute-0 podman[77020]: 2025-12-13 07:13:48.891272391 +0000 UTC m=+0.046395847 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:13:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 13 07:13:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/113461477' entity='client.admin' 
Dec 13 07:13:48 compute-0 systemd[1]: libpod-a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b.scope: Deactivated successfully.
Dec 13 07:13:48 compute-0 podman[76923]: 2025-12-13 07:13:48.920623573 +0000 UTC m=+0.415930809 container died a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-651281a5a237fc10c87f7d45bd56b1fc13ab68c1519e179ff9a75189eec31324-merged.mount: Deactivated successfully.
Dec 13 07:13:48 compute-0 podman[76923]: 2025-12-13 07:13:48.944758149 +0000 UTC m=+0.440065384 container remove a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b (image=quay.io/ceph/ceph:v20, name=angry_shaw, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 07:13:48 compute-0 systemd[1]: libpod-conmon-a085c3f3c87add73250f1ee65dfbc598c543b92df4323955faa9961a0bd5459b.scope: Deactivated successfully.
Dec 13 07:13:48 compute-0 podman[77020]: 2025-12-13 07:13:48.971718015 +0000 UTC m=+0.126841472 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:13:48 compute-0 podman[77049]: 2025-12-13 07:13:48.996147183 +0000 UTC m=+0.031432246 container create 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 13 07:13:49 compute-0 systemd[1]: Started libpod-conmon-0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02.scope.
Dec 13 07:13:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:49 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f0102fb24047bd1132c317cbf12d1dca4155d17fc330df05ac0eefaff1059a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f0102fb24047bd1132c317cbf12d1dca4155d17fc330df05ac0eefaff1059a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f0102fb24047bd1132c317cbf12d1dca4155d17fc330df05ac0eefaff1059a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:49 compute-0 podman[77049]: 2025-12-13 07:13:49.054293934 +0000 UTC m=+0.089579016 container init 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 07:13:49 compute-0 podman[77049]: 2025-12-13 07:13:49.061267254 +0000 UTC m=+0.096552316 container start 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:49 compute-0 podman[77049]: 2025-12-13 07:13:49.062346644 +0000 UTC m=+0.097631706 container attach 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:49 compute-0 podman[77049]: 2025-12-13 07:13:48.984314407 +0000 UTC m=+0.019599489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:49 compute-0 ceph-mon[74928]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:49 compute-0 ceph-mon[74928]: Saving service mgr spec with placement count:2
Dec 13 07:13:49 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:49 compute-0 ceph-mon[74928]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:49 compute-0 ceph-mon[74928]: Saving service crash spec with placement *
Dec 13 07:13:49 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:49 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:49 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/113461477' entity='client.admin' 
Dec 13 07:13:49 compute-0 sudo[76925]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:49 compute-0 sudo[77131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:49 compute-0 sudo[77131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:49 compute-0 sudo[77131]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:49 compute-0 sudo[77156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:13:49 compute-0 sudo[77156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:49 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 13 07:13:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:49 compute-0 systemd[1]: libpod-0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02.scope: Deactivated successfully.
Dec 13 07:13:49 compute-0 podman[77049]: 2025-12-13 07:13:49.403766465 +0000 UTC m=+0.439051527 container died 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f0102fb24047bd1132c317cbf12d1dca4155d17fc330df05ac0eefaff1059a-merged.mount: Deactivated successfully.
Dec 13 07:13:49 compute-0 podman[77049]: 2025-12-13 07:13:49.441542467 +0000 UTC m=+0.476827529 container remove 0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02 (image=quay.io/ceph/ceph:v20, name=practical_bhaskara, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:49 compute-0 systemd[1]: libpod-conmon-0b8f07f5bfba23797a0eefebdf8527402b446ea41040482e58e9122c99734b02.scope: Deactivated successfully.
Dec 13 07:13:49 compute-0 podman[77193]: 2025-12-13 07:13:49.497679558 +0000 UTC m=+0.032638914 container create b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 07:13:49 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77212 (sysctl)
Dec 13 07:13:49 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 13 07:13:49 compute-0 systemd[1]: Started libpod-conmon-b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142.scope.
Dec 13 07:13:49 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 13 07:13:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016e6141d7190888b0c651d223df1ba71c6fb81017726a27be40a028dbab9bc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016e6141d7190888b0c651d223df1ba71c6fb81017726a27be40a028dbab9bc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016e6141d7190888b0c651d223df1ba71c6fb81017726a27be40a028dbab9bc6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:49 compute-0 podman[77193]: 2025-12-13 07:13:49.560416519 +0000 UTC m=+0.095375875 container init b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 07:13:49 compute-0 podman[77193]: 2025-12-13 07:13:49.567214769 +0000 UTC m=+0.102174115 container start b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:49 compute-0 podman[77193]: 2025-12-13 07:13:49.568643615 +0000 UTC m=+0.103602961 container attach b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:13:49 compute-0 podman[77193]: 2025-12-13 07:13:49.484494259 +0000 UTC m=+0.019453605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:49 compute-0 sudo[77156]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:49 compute-0 sudo[77261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:49 compute-0 sudo[77261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:49 compute-0 sudo[77261]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:49 compute-0 sudo[77286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 13 07:13:49 compute-0 sudo[77286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:49 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 07:13:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:49 compute-0 ceph-mgr[75200]: [cephadm INFO root] Added label _admin to host compute-0
Dec 13 07:13:49 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 13 07:13:49 compute-0 reverent_kilby[77219]: Added label _admin to host compute-0
Dec 13 07:13:49 compute-0 systemd[1]: libpod-b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142.scope: Deactivated successfully.
Dec 13 07:13:49 compute-0 podman[77193]: 2025-12-13 07:13:49.929738388 +0000 UTC m=+0.464697734 container died b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 07:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-016e6141d7190888b0c651d223df1ba71c6fb81017726a27be40a028dbab9bc6-merged.mount: Deactivated successfully.
Dec 13 07:13:49 compute-0 podman[77193]: 2025-12-13 07:13:49.950883151 +0000 UTC m=+0.485842497 container remove b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142 (image=quay.io/ceph/ceph:v20, name=reverent_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:13:49 compute-0 systemd[1]: libpod-conmon-b81b90d10f82c3ee5c194b0b5bccba3bafa014e53adb3da1b2db07e9384af142.scope: Deactivated successfully.
Dec 13 07:13:50 compute-0 podman[77321]: 2025-12-13 07:13:49.999942726 +0000 UTC m=+0.031253119 container create 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:50 compute-0 systemd[1]: Started libpod-conmon-2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc.scope.
Dec 13 07:13:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7851f01e58ecfeceb28e5dc18b34d25d1884478a0e5c90d8c4f2a2b00859324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7851f01e58ecfeceb28e5dc18b34d25d1884478a0e5c90d8c4f2a2b00859324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7851f01e58ecfeceb28e5dc18b34d25d1884478a0e5c90d8c4f2a2b00859324/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:50 compute-0 podman[77321]: 2025-12-13 07:13:50.074470082 +0000 UTC m=+0.105780496 container init 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:50 compute-0 podman[77321]: 2025-12-13 07:13:50.080617641 +0000 UTC m=+0.111928034 container start 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:13:50 compute-0 podman[77321]: 2025-12-13 07:13:49.988662469 +0000 UTC m=+0.019972882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:50 compute-0 sudo[77286]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:50 compute-0 sudo[77356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:50 compute-0 sudo[77356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:50 compute-0 sudo[77356]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:50 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:50 compute-0 ceph-mon[74928]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:50 compute-0 sudo[77401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- inventory --format=json-pretty --filter-for-batch
Dec 13 07:13:50 compute-0 sudo[77401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 13 07:13:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1386230068' entity='client.admin' 
Dec 13 07:13:50 compute-0 hardcore_leavitt[77341]: set mgr/dashboard/cluster/status
Dec 13 07:13:50 compute-0 systemd[1]: libpod-2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc.scope: Deactivated successfully.
Dec 13 07:13:51 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:51 compute-0 ceph-mon[74928]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:13:51 compute-0 ceph-mon[74928]: Added label _admin to host compute-0
Dec 13 07:13:51 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1386230068' entity='client.admin' 
Dec 13 07:13:51 compute-0 podman[77321]: 2025-12-13 07:13:51.648508501 +0000 UTC m=+1.679818894 container attach 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:51 compute-0 podman[77321]: 2025-12-13 07:13:51.649059037 +0000 UTC m=+1.680369419 container died 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7851f01e58ecfeceb28e5dc18b34d25d1884478a0e5c90d8c4f2a2b00859324-merged.mount: Deactivated successfully.
Dec 13 07:13:51 compute-0 podman[77321]: 2025-12-13 07:13:51.671268422 +0000 UTC m=+1.702578814 container remove 2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc (image=quay.io/ceph/ceph:v20, name=hardcore_leavitt, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:13:51 compute-0 systemd[1]: Reloading.
Dec 13 07:13:51 compute-0 systemd-sysv-generator[77486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:51 compute-0 systemd-rc-local-generator[77482]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:51 compute-0 podman[77470]: 2025-12-13 07:13:51.811996159 +0000 UTC m=+0.032394685 container create f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 07:13:51 compute-0 podman[77470]: 2025-12-13 07:13:51.799790472 +0000 UTC m=+0.020189007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:13:51 compute-0 systemd[1]: libpod-conmon-2496c47064739a0730a50459da8e107c9f62171eff56514322be5a5b48d143bc.scope: Deactivated successfully.
Dec 13 07:13:51 compute-0 systemd[1]: Started libpod-conmon-f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd.scope.
Dec 13 07:13:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:51 compute-0 podman[77470]: 2025-12-13 07:13:51.951575979 +0000 UTC m=+0.171974494 container init f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:51 compute-0 podman[77470]: 2025-12-13 07:13:51.957803476 +0000 UTC m=+0.178202002 container start f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:51 compute-0 podman[77470]: 2025-12-13 07:13:51.959005697 +0000 UTC m=+0.179404213 container attach f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:51 compute-0 unruffled_bhabha[77501]: 167 167
Dec 13 07:13:51 compute-0 systemd[1]: libpod-f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd.scope: Deactivated successfully.
Dec 13 07:13:51 compute-0 podman[77470]: 2025-12-13 07:13:51.962843492 +0000 UTC m=+0.183242008 container died f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:51 compute-0 sudo[73993]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc032957a749dd65b25ae0365014108e336f667e366ff812e68ef5da8a5c50cc-merged.mount: Deactivated successfully.
Dec 13 07:13:51 compute-0 podman[77470]: 2025-12-13 07:13:51.982703761 +0000 UTC m=+0.203102277 container remove f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:51 compute-0 systemd[1]: libpod-conmon-f8758a7744e75ff461484070444dface6fd180749f148d3a6402bb8fd65d6dfd.scope: Deactivated successfully.
Dec 13 07:13:52 compute-0 podman[77523]: 2025-12-13 07:13:52.096395138 +0000 UTC m=+0.028162829 container create 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:13:52 compute-0 systemd[1]: Started libpod-conmon-15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278.scope.
Dec 13 07:13:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:52 compute-0 podman[77523]: 2025-12-13 07:13:52.155398327 +0000 UTC m=+0.087166029 container init 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:52 compute-0 podman[77523]: 2025-12-13 07:13:52.159987095 +0000 UTC m=+0.091754776 container start 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:52 compute-0 podman[77523]: 2025-12-13 07:13:52.161451158 +0000 UTC m=+0.093218839 container attach 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:52 compute-0 podman[77523]: 2025-12-13 07:13:52.085176847 +0000 UTC m=+0.016944529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:13:52 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:52 compute-0 sudo[77564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuqsazbnevhlqlwfcelbrsdurcqelrcu ; /usr/bin/python3'
Dec 13 07:13:52 compute-0 sudo[77564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:52 compute-0 python3[77566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:13:52 compute-0 podman[77570]: 2025-12-13 07:13:52.393386041 +0000 UTC m=+0.028875720 container create 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:13:52 compute-0 systemd[1]: Started libpod-conmon-3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44.scope.
Dec 13 07:13:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c37f0822196e2ceca9dd20459e587956bb8b35dd73b42dc6e61330eedc0bd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c37f0822196e2ceca9dd20459e587956bb8b35dd73b42dc6e61330eedc0bd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:52 compute-0 podman[77570]: 2025-12-13 07:13:52.434896052 +0000 UTC m=+0.070385731 container init 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:52 compute-0 podman[77570]: 2025-12-13 07:13:52.439395913 +0000 UTC m=+0.074885591 container start 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:52 compute-0 podman[77570]: 2025-12-13 07:13:52.441467156 +0000 UTC m=+0.076956856 container attach 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:13:52 compute-0 podman[77570]: 2025-12-13 07:13:52.383251688 +0000 UTC m=+0.018741387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]: [
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:     {
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "available": false,
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "being_replaced": false,
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "ceph_device_lvm": false,
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "lsm_data": {},
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "lvs": [],
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "path": "/dev/sr0",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "rejected_reasons": [
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "Insufficient space (<5GB)",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "Has a FileSystem"
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         ],
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         "sys_api": {
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "actuators": null,
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "device_nodes": [
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:                 "sr0"
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             ],
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "devname": "sr0",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "human_readable_size": "474.00 KB",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "id_bus": "ata",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "model": "QEMU DVD-ROM",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "nr_requests": "64",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "parent": "/dev/sr0",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "partitions": {},
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "path": "/dev/sr0",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "removable": "1",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "rev": "2.5+",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "ro": "0",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "rotational": "1",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "sas_address": "",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "sas_device_handle": "",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "scheduler_mode": "mq-deadline",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "sectors": 0,
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "sectorsize": "2048",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "size": 485376.0,
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "support_discard": "2048",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "type": "disk",
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:             "vendor": "QEMU"
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:         }
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]:     }
Dec 13 07:13:52 compute-0 dazzling_hodgkin[77536]: ]
Dec 13 07:13:52 compute-0 systemd[1]: libpod-15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278.scope: Deactivated successfully.
Dec 13 07:13:52 compute-0 podman[77523]: 2025-12-13 07:13:52.542120502 +0000 UTC m=+0.473888183 container died 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:52 compute-0 podman[77523]: 2025-12-13 07:13:52.560917672 +0000 UTC m=+0.492685343 container remove 15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:13:52 compute-0 systemd[1]: libpod-conmon-15f1b3d796270ddbf6cc38009091f932a84fafe634b4e6fdd759d4e4b23e1278.scope: Deactivated successfully.
Dec 13 07:13:52 compute-0 sudo[77401]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 07:13:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:13:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:13:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:13:52 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 13 07:13:52 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 13 07:13:52 compute-0 sudo[78228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 13 07:13:52 compute-0 sudo[78228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:52 compute-0 sudo[78228]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed154acef1e317a46c6b69d9c911aa863d805d7bfdcb025d96d35669f2a1e464-merged.mount: Deactivated successfully.
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054702 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:13:52 compute-0 sudo[78253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph
Dec 13 07:13:52 compute-0 sudo[78253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:52 compute-0 sudo[78253]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:52 compute-0 sudo[78278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.conf.new
Dec 13 07:13:52 compute-0 sudo[78278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:52 compute-0 sudo[78278]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 13 07:13:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3320068912' entity='client.admin' 
Dec 13 07:13:52 compute-0 systemd[1]: libpod-3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44.scope: Deactivated successfully.
Dec 13 07:13:52 compute-0 podman[77570]: 2025-12-13 07:13:52.775384614 +0000 UTC m=+0.410874293 container died 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-36c37f0822196e2ceca9dd20459e587956bb8b35dd73b42dc6e61330eedc0bd3-merged.mount: Deactivated successfully.
Dec 13 07:13:52 compute-0 sudo[78304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:52 compute-0 sudo[78304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:52 compute-0 podman[77570]: 2025-12-13 07:13:52.799305898 +0000 UTC m=+0.434795577 container remove 3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44 (image=quay.io/ceph/ceph:v20, name=nice_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:13:52 compute-0 sudo[78304]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:52 compute-0 systemd[1]: libpod-conmon-3765322c620d7c62d05e528ae327cdcdc515a1bd856ce1a9205518c3cb068d44.scope: Deactivated successfully.
Dec 13 07:13:52 compute-0 sudo[77564]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:52 compute-0 sudo[78339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.conf.new
Dec 13 07:13:52 compute-0 sudo[78339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:52 compute-0 sudo[78339]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:52 compute-0 sudo[78387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.conf.new
Dec 13 07:13:52 compute-0 sudo[78387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:52 compute-0 sudo[78387]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:52 compute-0 sudo[78412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.conf.new
Dec 13 07:13:52 compute-0 sudo[78412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:52 compute-0 sudo[78412]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 13 07:13:53 compute-0 sudo[78437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:53 compute-0 sudo[78437]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf
Dec 13 07:13:53 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf
Dec 13 07:13:53 compute-0 sudo[78462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config
Dec 13 07:13:53 compute-0 sudo[78462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78462]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config
Dec 13 07:13:53 compute-0 sudo[78514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78514]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf.new
Dec 13 07:13:53 compute-0 sudo[78565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78565]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:53 compute-0 sudo[78612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78612]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf.new
Dec 13 07:13:53 compute-0 sudo[78637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78637]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf.new
Dec 13 07:13:53 compute-0 sudo[78713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78713]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgrfprewdadjesmliegvroeyqfjogqkg ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610033.0849185-36867-51326405911085/async_wrapper.py j795524582964 30 /home/zuul/.ansible/tmp/ansible-tmp-1765610033.0849185-36867-51326405911085/AnsiballZ_command.py _'
Dec 13 07:13:53 compute-0 sudo[78802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:53 compute-0 sudo[78764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf.new
Dec 13 07:13:53 compute-0 sudo[78764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78764]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf.new /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf
Dec 13 07:13:53 compute-0 sudo[78810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78810]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 07:13:53 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 07:13:53 compute-0 sudo[78835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 13 07:13:53 compute-0 sudo[78835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78835]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 ansible-async_wrapper.py[78808]: Invoked with j795524582964 30 /home/zuul/.ansible/tmp/ansible-tmp-1765610033.0849185-36867-51326405911085/AnsiballZ_command.py _
Dec 13 07:13:53 compute-0 ansible-async_wrapper.py[78879]: Starting module and watcher
Dec 13 07:13:53 compute-0 ansible-async_wrapper.py[78879]: Start watching 78880 (30)
Dec 13 07:13:53 compute-0 ansible-async_wrapper.py[78880]: Start module (78880)
Dec 13 07:13:53 compute-0 ansible-async_wrapper.py[78808]: Return async_wrapper task started.
Dec 13 07:13:53 compute-0 sudo[78802]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph
Dec 13 07:13:53 compute-0 sudo[78860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78860]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 07:13:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:13:53 compute-0 ceph-mon[74928]: Updating compute-0:/etc/ceph/ceph.conf
Dec 13 07:13:53 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3320068912' entity='client.admin' 
Dec 13 07:13:53 compute-0 ceph-mon[74928]: Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.conf
Dec 13 07:13:53 compute-0 ceph-mon[74928]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 07:13:53 compute-0 sudo[78890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.client.admin.keyring.new
Dec 13 07:13:53 compute-0 sudo[78890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78890]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[78915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:53 compute-0 sudo[78915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78915]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 python3[78882]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:13:53 compute-0 sudo[78940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.client.admin.keyring.new
Dec 13 07:13:53 compute-0 sudo[78940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[78940]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 podman[78944]: 2025-12-13 07:13:53.715844171 +0000 UTC m=+0.032246518 container create ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 07:13:53 compute-0 systemd[1]: Started libpod-conmon-ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156.scope.
Dec 13 07:13:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04bc46ecc7846dde6377a8007e7182475c1eeb21760ca8ecaf54e1b64424c8fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04bc46ecc7846dde6377a8007e7182475c1eeb21760ca8ecaf54e1b64424c8fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:53 compute-0 podman[78944]: 2025-12-13 07:13:53.763305111 +0000 UTC m=+0.079707467 container init ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:53 compute-0 podman[78944]: 2025-12-13 07:13:53.769252692 +0000 UTC m=+0.085655038 container start ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 13 07:13:53 compute-0 podman[78944]: 2025-12-13 07:13:53.771240049 +0000 UTC m=+0.087642405 container attach ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:13:53 compute-0 sudo[79004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.client.admin.keyring.new
Dec 13 07:13:53 compute-0 podman[78944]: 2025-12-13 07:13:53.701309665 +0000 UTC m=+0.017712031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:53 compute-0 sudo[79004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[79004]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[79030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.client.admin.keyring.new
Dec 13 07:13:53 compute-0 sudo[79030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[79030]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[79059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 13 07:13:53 compute-0 sudo[79059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[79059]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring
Dec 13 07:13:53 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring
Dec 13 07:13:53 compute-0 sudo[79099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config
Dec 13 07:13:53 compute-0 sudo[79099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[79099]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:53 compute-0 sudo[79124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config
Dec 13 07:13:53 compute-0 sudo[79124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:53 compute-0 sudo[79124]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:54 compute-0 sudo[79149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring.new
Dec 13 07:13:54 compute-0 sudo[79149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:54 compute-0 sudo[79149]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:54 compute-0 sudo[79174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:54 compute-0 sudo[79174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:54 compute-0 sudo[79174]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:54 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:13:54 compute-0 naughty_germain[78990]: 
Dec 13 07:13:54 compute-0 naughty_germain[78990]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 07:13:54 compute-0 systemd[1]: libpod-ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156.scope: Deactivated successfully.
Dec 13 07:13:54 compute-0 podman[78944]: 2025-12-13 07:13:54.122834911 +0000 UTC m=+0.439237257 container died ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:13:54 compute-0 sudo[79199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring.new
Dec 13 07:13:54 compute-0 sudo[79199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:54 compute-0 sudo[79199]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-04bc46ecc7846dde6377a8007e7182475c1eeb21760ca8ecaf54e1b64424c8fa-merged.mount: Deactivated successfully.
Dec 13 07:13:54 compute-0 podman[78944]: 2025-12-13 07:13:54.148244533 +0000 UTC m=+0.464646879 container remove ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156 (image=quay.io/ceph/ceph:v20, name=naughty_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:54 compute-0 systemd[1]: libpod-conmon-ceea0c13a46f6be379ef5012ef9afdaad7cf0383c5c1aabb568fec8654474156.scope: Deactivated successfully.
Dec 13 07:13:54 compute-0 ansible-async_wrapper.py[78880]: Module complete (78880)
Dec 13 07:13:54 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:54 compute-0 sudo[79259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring.new
Dec 13 07:13:54 compute-0 sudo[79259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:54 compute-0 sudo[79259]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:54 compute-0 sudo[79284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring.new
Dec 13 07:13:54 compute-0 sudo[79284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:54 compute-0 sudo[79284]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:54 compute-0 sudo[79309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-00fdae1b-7fad-5f1b-8734-ba4d9298a6de/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring.new /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring
Dec 13 07:13:54 compute-0 sudo[79309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:54 compute-0 sudo[79309]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:13:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:54 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev a606794a-c965-49eb-88dd-648db5ecfed2 (Updating crash deployment (+1 -> 1))
Dec 13 07:13:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 13 07:13:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 13 07:13:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 13 07:13:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:13:54 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:54 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 13 07:13:54 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 13 07:13:54 compute-0 sudo[79334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:54 compute-0 sudo[79334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:54 compute-0 sudo[79334]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:54 compute-0 sudo[79359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:54 compute-0 sudo[79359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:54 compute-0 podman[79443]: 2025-12-13 07:13:54.717727118 +0000 UTC m=+0.026309053 container create 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:13:54 compute-0 systemd[1]: Started libpod-conmon-991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e.scope.
Dec 13 07:13:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:54 compute-0 sudo[79482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mptclovbidnnfqbaixubzuliqtxkocaz ; /usr/bin/python3'
Dec 13 07:13:54 compute-0 podman[79443]: 2025-12-13 07:13:54.759240876 +0000 UTC m=+0.067822812 container init 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:54 compute-0 sudo[79482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:54 compute-0 podman[79443]: 2025-12-13 07:13:54.765270613 +0000 UTC m=+0.073852549 container start 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 07:13:54 compute-0 podman[79443]: 2025-12-13 07:13:54.766429332 +0000 UTC m=+0.075011266 container attach 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:13:54 compute-0 gracious_poitras[79476]: 167 167
Dec 13 07:13:54 compute-0 systemd[1]: libpod-991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e.scope: Deactivated successfully.
Dec 13 07:13:54 compute-0 podman[79443]: 2025-12-13 07:13:54.768344242 +0000 UTC m=+0.076926177 container died 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f4c9349fac31bfe111eda279c3f74526e525167c24ee2c1fbff467903d6dcfd-merged.mount: Deactivated successfully.
Dec 13 07:13:54 compute-0 podman[79443]: 2025-12-13 07:13:54.784198599 +0000 UTC m=+0.092780534 container remove 991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poitras, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:13:54 compute-0 podman[79443]: 2025-12-13 07:13:54.707578127 +0000 UTC m=+0.016160072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:13:54 compute-0 systemd[1]: libpod-conmon-991ecccef7f6c0fac0eee4769d013507b751594afca639e6a78ade442cc3828e.scope: Deactivated successfully.
Dec 13 07:13:54 compute-0 systemd[1]: Reloading.
Dec 13 07:13:54 compute-0 systemd-sysv-generator[79523]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:54 compute-0 systemd-rc-local-generator[79520]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:54 compute-0 python3[79485]: ansible-ansible.legacy.async_status Invoked with jid=j795524582964.78808 mode=status _async_dir=/root/.ansible_async
Dec 13 07:13:54 compute-0 sudo[79482]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:55 compute-0 sudo[79579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cenvmplcjffpaqolnpvzmqhixggussov ; /usr/bin/python3'
Dec 13 07:13:55 compute-0 sudo[79579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:55 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:55 compute-0 systemd[1]: Reloading.
Dec 13 07:13:55 compute-0 systemd-sysv-generator[79613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:55 compute-0 systemd-rc-local-generator[79609]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:55 compute-0 python3[79583]: ansible-ansible.legacy.async_status Invoked with jid=j795524582964.78808 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 07:13:55 compute-0 sudo[79579]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:55 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:13:55 compute-0 ceph-mon[74928]: Updating compute-0:/var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/config/ceph.client.admin.keyring
Dec 13 07:13:55 compute-0 ceph-mon[74928]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:13:55 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:55 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:55 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:55 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 13 07:13:55 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 13 07:13:55 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:55 compute-0 ceph-mon[74928]: Deploying daemon crash.compute-0 on compute-0
Dec 13 07:13:55 compute-0 podman[79662]: 2025-12-13 07:13:55.407918064 +0000 UTC m=+0.029913200 container create 8e6a4f61ea03b0deb3b22f2359f4e4ace46ba5e323139a1f6359205360a9b0cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 07:13:55 compute-0 sudo[79694]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daelubisdvxlibhuqdjcqztfzbngykng ; /usr/bin/python3'
Dec 13 07:13:55 compute-0 sudo[79694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3897b70dc7cab4fb8705dd286c4e00c11f6f6071eb2e509679c4e4e93e2d82ee/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3897b70dc7cab4fb8705dd286c4e00c11f6f6071eb2e509679c4e4e93e2d82ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3897b70dc7cab4fb8705dd286c4e00c11f6f6071eb2e509679c4e4e93e2d82ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3897b70dc7cab4fb8705dd286c4e00c11f6f6071eb2e509679c4e4e93e2d82ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:55 compute-0 podman[79662]: 2025-12-13 07:13:55.448717228 +0000 UTC m=+0.070712364 container init 8e6a4f61ea03b0deb3b22f2359f4e4ace46ba5e323139a1f6359205360a9b0cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:13:55 compute-0 podman[79662]: 2025-12-13 07:13:55.453975935 +0000 UTC m=+0.075971062 container start 8e6a4f61ea03b0deb3b22f2359f4e4ace46ba5e323139a1f6359205360a9b0cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 07:13:55 compute-0 bash[79662]: 8e6a4f61ea03b0deb3b22f2359f4e4ace46ba5e323139a1f6359205360a9b0cc
Dec 13 07:13:55 compute-0 podman[79662]: 2025-12-13 07:13:55.396423633 +0000 UTC m=+0.018418759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:13:55 compute-0 systemd[1]: Started Ceph crash.compute-0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:13:55 compute-0 sudo[79359]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 13 07:13:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:55 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev a606794a-c965-49eb-88dd-648db5ecfed2 (Updating crash deployment (+1 -> 1))
Dec 13 07:13:55 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event a606794a-c965-49eb-88dd-648db5ecfed2 (Updating crash deployment (+1 -> 1)) in 1 seconds
Dec 13 07:13:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:55 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 9fd020bf-18ac-442a-a411-2432b8213490 (Updating mgr deployment (+1 -> 2))
Dec 13 07:13:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 13 07:13:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr services"} : dispatch
Dec 13 07:13:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:13:55 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:55 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.ndpimg on compute-0
Dec 13 07:13:55 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.ndpimg on compute-0
Dec 13 07:13:55 compute-0 python3[79698]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 07:13:55 compute-0 sudo[79708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:55 compute-0 sudo[79708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:55 compute-0 sudo[79708]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:55 compute-0 sudo[79694]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.581+0000 7f1275645640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.581+0000 7f1275645640 -1 AuthRegistry(0x7f1270052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.585+0000 7f1275645640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.585+0000 7f1275645640 -1 AuthRegistry(0x7f1275643fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.586+0000 7f126effd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: 2025-12-13T07:13:55.587+0000 7f1275645640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 13 07:13:55 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-crash-compute-0[79701]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 13 07:13:55 compute-0 sudo[79735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:55 compute-0 sudo[79735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:55 compute-0 sudo[79805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cenmhdkbnykjtdkhhdhdlnrtynbqzpgr ; /usr/bin/python3'
Dec 13 07:13:55 compute-0 sudo[79805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:55 compute-0 podman[79829]: 2025-12-13 07:13:55.907141197 +0000 UTC m=+0.028940171 container create 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:13:55 compute-0 systemd[1]: Started libpod-conmon-671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3.scope.
Dec 13 07:13:55 compute-0 python3[79811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:13:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:55 compute-0 podman[79829]: 2025-12-13 07:13:55.953816629 +0000 UTC m=+0.075615623 container init 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:55 compute-0 podman[79829]: 2025-12-13 07:13:55.960118247 +0000 UTC m=+0.081917221 container start 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:13:55 compute-0 podman[79829]: 2025-12-13 07:13:55.96351894 +0000 UTC m=+0.085317914 container attach 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:55 compute-0 clever_knuth[79842]: 167 167
Dec 13 07:13:55 compute-0 podman[79829]: 2025-12-13 07:13:55.965176837 +0000 UTC m=+0.086975812 container died 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 07:13:55 compute-0 systemd[1]: libpod-671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3.scope: Deactivated successfully.
Dec 13 07:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f7151c6361dc9d5636e3688b90b3ba75de0aa09b025f71d95c88ac41bb4ba95-merged.mount: Deactivated successfully.
Dec 13 07:13:55 compute-0 podman[79844]: 2025-12-13 07:13:55.980339313 +0000 UTC m=+0.037627704 container create 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 13 07:13:55 compute-0 podman[79829]: 2025-12-13 07:13:55.982509003 +0000 UTC m=+0.104307977 container remove 671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:55 compute-0 podman[79829]: 2025-12-13 07:13:55.896084379 +0000 UTC m=+0.017883363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:13:55 compute-0 systemd[1]: libpod-conmon-671ca014afe2ed695d1fce2d6df1dc620b9f9e0f76a6d2c8789ed912402534a3.scope: Deactivated successfully.
Dec 13 07:13:56 compute-0 systemd[1]: Started libpod-conmon-8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4.scope.
Dec 13 07:13:56 compute-0 systemd[1]: Reloading.
Dec 13 07:13:56 compute-0 podman[79844]: 2025-12-13 07:13:55.962537616 +0000 UTC m=+0.019826016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:56 compute-0 systemd-rc-local-generator[79892]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:56 compute-0 systemd-sysv-generator[79896]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:56 compute-0 ceph-mgr[75200]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 07:13:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0c9a9eb4baa3442a907d4c209ad3a4977903a2a0ba4a425e2f6a48f3e6455/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0c9a9eb4baa3442a907d4c209ad3a4977903a2a0ba4a425e2f6a48f3e6455/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0c9a9eb4baa3442a907d4c209ad3a4977903a2a0ba4a425e2f6a48f3e6455/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:56 compute-0 podman[79844]: 2025-12-13 07:13:56.228492408 +0000 UTC m=+0.285780788 container init 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:13:56 compute-0 podman[79844]: 2025-12-13 07:13:56.234896448 +0000 UTC m=+0.292184848 container start 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:13:56 compute-0 podman[79844]: 2025-12-13 07:13:56.235929921 +0000 UTC m=+0.293218311 container attach 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:56 compute-0 systemd[1]: Reloading.
Dec 13 07:13:56 compute-0 systemd-rc-local-generator[79933]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:13:56 compute-0 systemd-sysv-generator[79937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:13:56 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ndpimg for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ndpimg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr services"} : dispatch
Dec 13 07:13:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:56 compute-0 ceph-mon[74928]: Deploying daemon mgr.compute-0.ndpimg on compute-0
Dec 13 07:13:56 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:13:56 compute-0 vibrant_solomon[79870]: 
Dec 13 07:13:56 compute-0 vibrant_solomon[79870]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 07:13:56 compute-0 systemd[1]: libpod-8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4.scope: Deactivated successfully.
Dec 13 07:13:56 compute-0 podman[79844]: 2025-12-13 07:13:56.582578594 +0000 UTC m=+0.639866975 container died 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbe0c9a9eb4baa3442a907d4c209ad3a4977903a2a0ba4a425e2f6a48f3e6455-merged.mount: Deactivated successfully.
Dec 13 07:13:56 compute-0 podman[79844]: 2025-12-13 07:13:56.613863462 +0000 UTC m=+0.671151852 container remove 8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4 (image=quay.io/ceph/ceph:v20, name=vibrant_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 07:13:56 compute-0 systemd[1]: libpod-conmon-8f61abaa953fa280a84dd573546b218c0a249cddf5c186aff4a4bb28d3bdb5d4.scope: Deactivated successfully.
Dec 13 07:13:56 compute-0 sudo[79805]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:56 compute-0 podman[80008]: 2025-12-13 07:13:56.646007224 +0000 UTC m=+0.052293325 container create 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de/merged/var/lib/ceph/mgr/ceph-compute-0.ndpimg supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:56 compute-0 podman[80008]: 2025-12-13 07:13:56.684778637 +0000 UTC m=+0.091064758 container init 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 07:13:56 compute-0 podman[80008]: 2025-12-13 07:13:56.689406568 +0000 UTC m=+0.095692669 container start 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:13:56 compute-0 bash[80008]: 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88
Dec 13 07:13:56 compute-0 podman[80008]: 2025-12-13 07:13:56.633533362 +0000 UTC m=+0.039819483 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:13:56 compute-0 systemd[1]: Started Ceph mgr.compute-0.ndpimg for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:13:56 compute-0 ceph-mgr[80033]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:13:56 compute-0 ceph-mgr[80033]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 07:13:56 compute-0 ceph-mgr[80033]: pidfile_write: ignore empty --pid-file
Dec 13 07:13:56 compute-0 sudo[79735]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 07:13:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 9fd020bf-18ac-442a-a411-2432b8213490 (Updating mgr deployment (+1 -> 2))
Dec 13 07:13:56 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 9fd020bf-18ac-442a-a411-2432b8213490 (Updating mgr deployment (+1 -> 2)) in 1 seconds
Dec 13 07:13:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 07:13:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:56 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'alerts'
Dec 13 07:13:56 compute-0 sudo[80054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:13:56 compute-0 sudo[80054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:56 compute-0 sudo[80054]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:56 compute-0 sudo[80079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:56 compute-0 sudo[80079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:56 compute-0 sudo[80079]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:56 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'balancer'
Dec 13 07:13:56 compute-0 sudo[80145]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkkcbduziydltgcdxgnomugbfirvpmyd ; /usr/bin/python3'
Dec 13 07:13:56 compute-0 sudo[80145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:56 compute-0 sudo[80110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:13:56 compute-0 sudo[80110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:56 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'cephadm'
Dec 13 07:13:56 compute-0 python3[80153]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:13:57 compute-0 podman[80155]: 2025-12-13 07:13:57.030314618 +0000 UTC m=+0.036193237 container create abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:13:57 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:57 compute-0 systemd[1]: Started libpod-conmon-abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2.scope.
Dec 13 07:13:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e319d0654aa2e360cdbaea7a828ed62213e5c8c9cb6e633ff2cd10d7be535a7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e319d0654aa2e360cdbaea7a828ed62213e5c8c9cb6e633ff2cd10d7be535a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e319d0654aa2e360cdbaea7a828ed62213e5c8c9cb6e633ff2cd10d7be535a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:57 compute-0 podman[80155]: 2025-12-13 07:13:57.080055675 +0000 UTC m=+0.085934304 container init abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:13:57 compute-0 podman[80155]: 2025-12-13 07:13:57.086512674 +0000 UTC m=+0.092391293 container start abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:13:57 compute-0 podman[80155]: 2025-12-13 07:13:57.087850179 +0000 UTC m=+0.093728798 container attach abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 07:13:57 compute-0 podman[80155]: 2025-12-13 07:13:57.018553566 +0000 UTC m=+0.024432205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:57 compute-0 podman[80215]: 2025-12-13 07:13:57.22710685 +0000 UTC m=+0.043413271 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:57 compute-0 podman[80215]: 2025-12-13 07:13:57.30116235 +0000 UTC m=+0.117468771 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1962733887' entity='client.admin' 
Dec 13 07:13:57 compute-0 systemd[1]: libpod-abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2.scope: Deactivated successfully.
Dec 13 07:13:57 compute-0 podman[80155]: 2025-12-13 07:13:57.438272617 +0000 UTC m=+0.444151226 container died abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e319d0654aa2e360cdbaea7a828ed62213e5c8c9cb6e633ff2cd10d7be535a7-merged.mount: Deactivated successfully.
Dec 13 07:13:57 compute-0 podman[80155]: 2025-12-13 07:13:57.472074997 +0000 UTC m=+0.477953616 container remove abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2 (image=quay.io/ceph/ceph:v20, name=epic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:57 compute-0 systemd[1]: libpod-conmon-abb29f84e0e380833c1552e9ce55abe5d64c5d30cf4018d326ccb8a38616b1b2.scope: Deactivated successfully.
Dec 13 07:13:57 compute-0 sudo[80145]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:57 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'crash'
Dec 13 07:13:57 compute-0 sudo[80362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehtulnbslivmrmadkdnlzjhuibmkxbdg ; /usr/bin/python3'
Dec 13 07:13:57 compute-0 sudo[80362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:57 compute-0 sudo[80110]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'dashboard'
Dec 13 07:13:57 compute-0 sudo[80365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:13:57 compute-0 sudo[80365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:57 compute-0 sudo[80365]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:13:57 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 07:13:57 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:13:57 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:57 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 07:13:57 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 07:13:57 compute-0 python3[80364]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:13:57 compute-0 sudo[80390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1962733887' entity='client.admin' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 13 07:13:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:57 compute-0 sudo[80390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:57 compute-0 sudo[80390]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:57 compute-0 sudo[80417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:57 compute-0 sudo[80417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:57 compute-0 podman[80414]: 2025-12-13 07:13:57.790561032 +0000 UTC m=+0.041704488 container create 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:13:57 compute-0 systemd[1]: Started libpod-conmon-2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8.scope.
Dec 13 07:13:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f46e7c661ec1f0eee69d3a7deda7130356902892e720de21f8663b91e970dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f46e7c661ec1f0eee69d3a7deda7130356902892e720de21f8663b91e970dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f46e7c661ec1f0eee69d3a7deda7130356902892e720de21f8663b91e970dd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:57 compute-0 podman[80414]: 2025-12-13 07:13:57.840748398 +0000 UTC m=+0.091891863 container init 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 07:13:57 compute-0 podman[80414]: 2025-12-13 07:13:57.845216338 +0000 UTC m=+0.096359793 container start 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:57 compute-0 podman[80414]: 2025-12-13 07:13:57.846303241 +0000 UTC m=+0.097446686 container attach 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:13:57 compute-0 podman[80414]: 2025-12-13 07:13:57.778066552 +0000 UTC m=+0.029210007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:58 compute-0 podman[80489]: 2025-12-13 07:13:58.049932758 +0000 UTC m=+0.027246516 container create df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 07:13:58 compute-0 systemd[1]: Started libpod-conmon-df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a.scope.
Dec 13 07:13:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:58 compute-0 podman[80489]: 2025-12-13 07:13:58.088046855 +0000 UTC m=+0.065360622 container init df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:13:58 compute-0 podman[80489]: 2025-12-13 07:13:58.091896804 +0000 UTC m=+0.069210561 container start df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:58 compute-0 podman[80489]: 2025-12-13 07:13:58.093515938 +0000 UTC m=+0.070829695 container attach df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:13:58 compute-0 hardcore_hermann[80501]: 167 167
Dec 13 07:13:58 compute-0 systemd[1]: libpod-df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a.scope: Deactivated successfully.
Dec 13 07:13:58 compute-0 podman[80506]: 2025-12-13 07:13:58.125333066 +0000 UTC m=+0.021583760 container died df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 07:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d71c05acfa5e92d7e056f3576202eb85611d65538b171ecaa093dd2658c6bd4-merged.mount: Deactivated successfully.
Dec 13 07:13:58 compute-0 podman[80489]: 2025-12-13 07:13:58.038815907 +0000 UTC m=+0.016129685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:58 compute-0 podman[80506]: 2025-12-13 07:13:58.143599439 +0000 UTC m=+0.039850122 container remove df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a (image=quay.io/ceph/ceph:v20, name=hardcore_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:58 compute-0 systemd[1]: libpod-conmon-df8c0ae5491626a85a2b282c2dbdc7a0a22b0db3461505cf7d6733767d26d40a.scope: Deactivated successfully.
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1381470394' entity='client.admin' 
Dec 13 07:13:58 compute-0 systemd[1]: libpod-2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8.scope: Deactivated successfully.
Dec 13 07:13:58 compute-0 podman[80414]: 2025-12-13 07:13:58.17662604 +0000 UTC m=+0.427769485 container died 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 07:13:58 compute-0 sudo[80417]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:58 compute-0 ceph-mgr[75200]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 13 07:13:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 13 07:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-98f46e7c661ec1f0eee69d3a7deda7130356902892e720de21f8663b91e970dd-merged.mount: Deactivated successfully.
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:58 compute-0 podman[80414]: 2025-12-13 07:13:58.199113248 +0000 UTC m=+0.450256692 container remove 2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8 (image=quay.io/ceph/ceph:v20, name=hardcore_hopper, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:58 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.qsherl (unknown last config time)...
Dec 13 07:13:58 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.qsherl (unknown last config time)...
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.qsherl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.qsherl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr services"} : dispatch
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:58 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.qsherl on compute-0
Dec 13 07:13:58 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.qsherl on compute-0
Dec 13 07:13:58 compute-0 sudo[80362]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:58 compute-0 systemd[1]: libpod-conmon-2d87ef2841aa716899a3c197a635034c745a3e6db883a0d7aee2ca9894f8d6d8.scope: Deactivated successfully.
Dec 13 07:13:58 compute-0 sudo[80529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:58 compute-0 sudo[80529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:58 compute-0 sudo[80529]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:58 compute-0 sudo[80554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:13:58 compute-0 sudo[80554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:58 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'devicehealth'
Dec 13 07:13:58 compute-0 sudo[80602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltwsfwvufmyifagtdyjtmhhtolyzrahv ; /usr/bin/python3'
Dec 13 07:13:58 compute-0 sudo[80602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:13:58 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 07:13:58 compute-0 python3[80604]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:13:58 compute-0 podman[80616]: 2025-12-13 07:13:58.509722113 +0000 UTC m=+0.032863075 container create 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 07:13:58 compute-0 systemd[1]: Started libpod-conmon-5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a.scope.
Dec 13 07:13:58 compute-0 ansible-async_wrapper.py[78879]: Done in kid B.
Dec 13 07:13:58 compute-0 podman[80629]: 2025-12-13 07:13:58.544843845 +0000 UTC m=+0.042786754 container create f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a3906650d1ec75864d39c9b1cef424ec1fd747013068aab9f8adf08e807e7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a3906650d1ec75864d39c9b1cef424ec1fd747013068aab9f8adf08e807e7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a3906650d1ec75864d39c9b1cef424ec1fd747013068aab9f8adf08e807e7b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:13:58 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg[80029]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 07:13:58 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg[80029]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 07:13:58 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg[80029]:   from numpy import show_config as show_numpy_config
Dec 13 07:13:58 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'influx'
Dec 13 07:13:58 compute-0 podman[80616]: 2025-12-13 07:13:58.561535566 +0000 UTC m=+0.084676548 container init 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:58 compute-0 podman[80616]: 2025-12-13 07:13:58.566347403 +0000 UTC m=+0.089488365 container start 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:13:58 compute-0 podman[80616]: 2025-12-13 07:13:58.567380564 +0000 UTC m=+0.090521527 container attach 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 07:13:58 compute-0 podman[80616]: 2025-12-13 07:13:58.497307663 +0000 UTC m=+0.020448646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:58 compute-0 systemd[1]: Started libpod-conmon-f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225.scope.
Dec 13 07:13:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:13:58 compute-0 podman[80629]: 2025-12-13 07:13:58.620796991 +0000 UTC m=+0.118739911 container init f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:13:58 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'insights'
Dec 13 07:13:58 compute-0 podman[80629]: 2025-12-13 07:13:58.625063523 +0000 UTC m=+0.123006422 container start f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:13:58 compute-0 podman[80629]: 2025-12-13 07:13:58.529517641 +0000 UTC m=+0.027460550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:13:58 compute-0 silly_feistel[80649]: 167 167
Dec 13 07:13:58 compute-0 systemd[1]: libpod-f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225.scope: Deactivated successfully.
Dec 13 07:13:58 compute-0 conmon[80649]: conmon f4c136b0029af81a818b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225.scope/container/memory.events
Dec 13 07:13:58 compute-0 podman[80629]: 2025-12-13 07:13:58.628884216 +0000 UTC m=+0.126827115 container attach f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:13:58 compute-0 podman[80629]: 2025-12-13 07:13:58.629303584 +0000 UTC m=+0.127246483 container died f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:13:58 compute-0 podman[80629]: 2025-12-13 07:13:58.648343832 +0000 UTC m=+0.146286730 container remove f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225 (image=quay.io/ceph/ceph:v20, name=silly_feistel, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:13:58 compute-0 systemd[1]: libpod-conmon-f4c136b0029af81a818bd52422fe1d68fb06ffe5310d4d3a3cbaa514dd7fb225.scope: Deactivated successfully.
Dec 13 07:13:58 compute-0 sudo[80554]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:58 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'iostat'
Dec 13 07:13:58 compute-0 sudo[80681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:13:58 compute-0 sudo[80681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:58 compute-0 sudo[80681]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:58 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'k8sevents'
Dec 13 07:13:58 compute-0 sudo[80706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:13:58 compute-0 sudo[80706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 13 07:13:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3278627661' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 13 07:13:59 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:13:59 compute-0 ceph-mgr[75200]: [progress INFO root] Writing back 2 completed events
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 07:13:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cb4f343d2aab6bcf7e76a7b26923f3959e1325d18da72b9952f6b0ca12c0650-merged.mount: Deactivated successfully.
Dec 13 07:13:59 compute-0 podman[80769]: 2025-12-13 07:13:59.08555115 +0000 UTC m=+0.038396778 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:59 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'localpool'
Dec 13 07:13:59 compute-0 ceph-mon[74928]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 07:13:59 compute-0 ceph-mon[74928]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1381470394' entity='client.admin' 
Dec 13 07:13:59 compute-0 ceph-mon[74928]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:13:59 compute-0 ceph-mon[74928]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 ceph-mon[74928]: Reconfiguring mgr.compute-0.qsherl (unknown last config time)...
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.qsherl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mgr services"} : dispatch
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:59 compute-0 ceph-mon[74928]: Reconfiguring daemon mgr.compute-0.qsherl on compute-0
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3278627661' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 13 07:13:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 podman[80769]: 2025-12-13 07:13:59.161722979 +0000 UTC m=+0.114568586 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:13:59 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 07:13:59 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'mirroring'
Dec 13 07:13:59 compute-0 sudo[80706]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:13:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:13:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:13:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:13:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:13:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:13:59 compute-0 sudo[80860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:13:59 compute-0 sudo[80860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:13:59 compute-0 sudo[80860]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:59 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'nfs'
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:13:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3278627661' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 13 07:13:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 13 07:13:59 compute-0 recursing_pike[80643]: set require_min_compat_client to mimic
Dec 13 07:13:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 13 07:13:59 compute-0 systemd[1]: libpod-5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a.scope: Deactivated successfully.
Dec 13 07:13:59 compute-0 podman[80616]: 2025-12-13 07:13:59.708776426 +0000 UTC m=+1.231917388 container died 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 07:13:59 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'orchestrator'
Dec 13 07:13:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0a3906650d1ec75864d39c9b1cef424ec1fd747013068aab9f8adf08e807e7b-merged.mount: Deactivated successfully.
Dec 13 07:13:59 compute-0 podman[80616]: 2025-12-13 07:13:59.735932211 +0000 UTC m=+1.259073173 container remove 5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a (image=quay.io/ceph/ceph:v20, name=recursing_pike, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:13:59 compute-0 systemd[1]: libpod-conmon-5cda0c1ae7f6e2f94c98ce915c84cc83adf22f92de6102ffc9d7e2fef9dad28a.scope: Deactivated successfully.
Dec 13 07:13:59 compute-0 sudo[80602]: pam_unix(sudo:session): session closed for user root
Dec 13 07:13:59 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 07:13:59 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'osd_support'
Dec 13 07:14:00 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 07:14:00 compute-0 sudo[80919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjjxqobaugwzvyqtxogukfrkcjfpggpj ; /usr/bin/python3'
Dec 13 07:14:00 compute-0 sudo[80919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:00 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'progress'
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:00 compute-0 python3[80921]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:00 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'prometheus'
Dec 13 07:14:00 compute-0 podman[80922]: 2025-12-13 07:14:00.230689076 +0000 UTC m=+0.024612474 container create 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:00 compute-0 systemd[1]: Started libpod-conmon-12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d.scope.
Dec 13 07:14:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9ddf781c96ad350072930a964ecf3ca0f99756469cf22794dc35497e877548/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9ddf781c96ad350072930a964ecf3ca0f99756469cf22794dc35497e877548/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9ddf781c96ad350072930a964ecf3ca0f99756469cf22794dc35497e877548/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:00 compute-0 podman[80922]: 2025-12-13 07:14:00.289273359 +0000 UTC m=+0.083196756 container init 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:14:00 compute-0 podman[80922]: 2025-12-13 07:14:00.293486901 +0000 UTC m=+0.087410298 container start 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:00 compute-0 podman[80922]: 2025-12-13 07:14:00.294569796 +0000 UTC m=+0.088493194 container attach 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:00 compute-0 podman[80922]: 2025-12-13 07:14:00.220821144 +0000 UTC m=+0.014744561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3278627661' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 13 07:14:00 compute-0 ceph-mon[74928]: osdmap e3: 0 total, 0 up, 0 in
Dec 13 07:14:00 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'rbd_support'
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:14:00 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'rgw'
Dec 13 07:14:00 compute-0 sudo[80958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:00 compute-0 sudo[80958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:00 compute-0 sudo[80958]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:00 compute-0 sudo[80983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 13 07:14:00 compute-0 sudo[80983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:00 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'rook'
Dec 13 07:14:00 compute-0 sudo[80983]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: [cephadm INFO root] Added host compute-0
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 13 07:14:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 54b3d2bd-10ea-49e0-a55f-d5b0fd1d6086 (Updating mgr deployment (-1 -> 1))
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.ndpimg from compute-0 -- ports [8765]
Dec 13 07:14:00 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.ndpimg from compute-0 -- ports [8765]
Dec 13 07:14:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:00 compute-0 silly_matsumoto[80934]: Added host 'compute-0' with addr '192.168.122.100'
Dec 13 07:14:00 compute-0 silly_matsumoto[80934]: Scheduled mon update...
Dec 13 07:14:00 compute-0 silly_matsumoto[80934]: Scheduled mgr update...
Dec 13 07:14:00 compute-0 silly_matsumoto[80934]: Scheduled osd.default_drive_group update...
Dec 13 07:14:00 compute-0 systemd[1]: libpod-12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d.scope: Deactivated successfully.
Dec 13 07:14:00 compute-0 podman[80922]: 2025-12-13 07:14:00.972764699 +0000 UTC m=+0.766688095 container died 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c9ddf781c96ad350072930a964ecf3ca0f99756469cf22794dc35497e877548-merged.mount: Deactivated successfully.
Dec 13 07:14:00 compute-0 sudo[81027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:00 compute-0 sudo[81027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:00 compute-0 podman[80922]: 2025-12-13 07:14:00.999326507 +0000 UTC m=+0.793249904 container remove 12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d (image=quay.io/ceph/ceph:v20, name=silly_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:01 compute-0 sudo[81027]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:01 compute-0 systemd[1]: libpod-conmon-12f15b80f3a73e6a8d1bf26a164fe36710a94286224134ca044a455385aa6a7d.scope: Deactivated successfully.
Dec 13 07:14:01 compute-0 sudo[80919]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:01 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:14:01 compute-0 sudo[81063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --name mgr.compute-0.ndpimg --force --tcp-ports 8765
Dec 13 07:14:01 compute-0 sudo[81063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:01 compute-0 sudo[81111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wursmxhysvaavtkooichgtesqzwfwxlm ; /usr/bin/python3'
Dec 13 07:14:01 compute-0 sudo[81111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:01 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.ndpimg for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:14:01 compute-0 python3[81113]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:01 compute-0 ceph-mgr[80033]: mgr[py] Loading python module 'selftest'
Dec 13 07:14:01 compute-0 podman[81139]: 2025-12-13 07:14:01.36161339 +0000 UTC m=+0.039154274 container create 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:01 compute-0 systemd[1]: Started libpod-conmon-77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80.scope.
Dec 13 07:14:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a5f9e57a29c3839a3d559ac9a164750511dd64b550251b50d71275cb58526f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a5f9e57a29c3839a3d559ac9a164750511dd64b550251b50d71275cb58526f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a5f9e57a29c3839a3d559ac9a164750511dd64b550251b50d71275cb58526f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:01 compute-0 ceph-mon[74928]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 podman[81139]: 2025-12-13 07:14:01.423713893 +0000 UTC m=+0.101254796 container init 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Dec 13 07:14:01 compute-0 podman[81156]: 2025-12-13 07:14:01.423739711 +0000 UTC m=+0.073527427 container died 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:01 compute-0 podman[81139]: 2025-12-13 07:14:01.430513166 +0000 UTC m=+0.108054059 container start 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 07:14:01 compute-0 podman[81139]: 2025-12-13 07:14:01.431725466 +0000 UTC m=+0.109266349 container attach 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 07:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9801f212c1a44328b5743c56063ebc515c709dd2f1586ea86202bedf61a16de-merged.mount: Deactivated successfully.
Dec 13 07:14:01 compute-0 podman[81139]: 2025-12-13 07:14:01.349822874 +0000 UTC m=+0.027363756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:01 compute-0 podman[81156]: 2025-12-13 07:14:01.455851174 +0000 UTC m=+0.105638889 container remove 0ff35a5463d3bbc7d852b4c61818e209fc4953caeb52224a5f4e7570eb1a0d88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:14:01 compute-0 bash[81156]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-ndpimg
Dec 13 07:14:01 compute-0 systemd[1]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mgr.compute-0.ndpimg.service: Main process exited, code=exited, status=143/n/a
Dec 13 07:14:01 compute-0 systemd[1]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mgr.compute-0.ndpimg.service: Failed with result 'exit-code'.
Dec 13 07:14:01 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.ndpimg for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:14:01 compute-0 systemd[1]: ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mgr.compute-0.ndpimg.service: Consumed 5.033s CPU time, 374.6M memory peak, read 0B from disk, written 131.5K to disk.
Dec 13 07:14:01 compute-0 systemd[1]: Reloading.
Dec 13 07:14:01 compute-0 systemd-sysv-generator[81248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:01 compute-0 systemd-rc-local-generator[81245]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 07:14:01 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4180421055' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 07:14:01 compute-0 hopeful_raman[81169]: 
Dec 13 07:14:01 compute-0 hopeful_raman[81169]: {"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":39,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-13T07:13:21.319345+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-13T07:13:21.320643+0000","services":{}},"progress_events":{}}
Dec 13 07:14:01 compute-0 systemd[1]: libpod-77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80.scope: Deactivated successfully.
Dec 13 07:14:01 compute-0 podman[81139]: 2025-12-13 07:14:01.837166462 +0000 UTC m=+0.514707345 container died 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:14:01 compute-0 sudo[81063]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:01 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.ndpimg
Dec 13 07:14:01 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.ndpimg
Dec 13 07:14:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"} v 0)
Dec 13 07:14:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"} : dispatch
Dec 13 07:14:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"}]': finished
Dec 13 07:14:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 07:14:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 54b3d2bd-10ea-49e0-a55f-d5b0fd1d6086 (Updating mgr deployment (-1 -> 1))
Dec 13 07:14:01 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 54b3d2bd-10ea-49e0-a55f-d5b0fd1d6086 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec 13 07:14:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 07:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2a5f9e57a29c3839a3d559ac9a164750511dd64b550251b50d71275cb58526f-merged.mount: Deactivated successfully.
Dec 13 07:14:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:01 compute-0 podman[81139]: 2025-12-13 07:14:01.860908589 +0000 UTC m=+0.538449471 container remove 77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80 (image=quay.io/ceph/ceph:v20, name=hopeful_raman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 07:14:01 compute-0 systemd[1]: libpod-conmon-77939f5abbbbc27181e2af2f69b7a74bf65c28931e3a8bde325718e81b0eec80.scope: Deactivated successfully.
Dec 13 07:14:01 compute-0 sudo[81111]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:01 compute-0 sudo[81276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:14:01 compute-0 sudo[81276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:01 compute-0 sudo[81276]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:01 compute-0 sudo[81303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:01 compute-0 sudo[81303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:01 compute-0 sudo[81303]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:01 compute-0 sudo[81328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:14:01 compute-0 sudo[81328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:02 compute-0 podman[81388]: 2025-12-13 07:14:02.288042569 +0000 UTC m=+0.040851474 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:14:02 compute-0 ceph-mon[74928]: Added host compute-0
Dec 13 07:14:02 compute-0 ceph-mon[74928]: Saving service mon spec with placement compute-0
Dec 13 07:14:02 compute-0 ceph-mon[74928]: Saving service mgr spec with placement compute-0
Dec 13 07:14:02 compute-0 ceph-mon[74928]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 07:14:02 compute-0 ceph-mon[74928]: Saving service osd.default_drive_group spec with placement compute-0
Dec 13 07:14:02 compute-0 ceph-mon[74928]: Removing daemon mgr.compute-0.ndpimg from compute-0 -- ports [8765]
Dec 13 07:14:02 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4180421055' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 07:14:02 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"} : dispatch
Dec 13 07:14:02 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ndpimg"}]': finished
Dec 13 07:14:02 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:02 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:02 compute-0 podman[81405]: 2025-12-13 07:14:02.421531618 +0000 UTC m=+0.047827931 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 07:14:02 compute-0 podman[81388]: 2025-12-13 07:14:02.424155792 +0000 UTC m=+0.176964687 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 07:14:02 compute-0 sudo[81328]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:02 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:02 compute-0 sudo[81465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:02 compute-0 sudo[81465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:02 compute-0 sudo[81465]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:02 compute-0 sudo[81490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:14:02 compute-0 sudo[81490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:02 compute-0 podman[81525]: 2025-12-13 07:14:02.960649325 +0000 UTC m=+0.024243169 container create ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:02 compute-0 systemd[1]: Started libpod-conmon-ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877.scope.
Dec 13 07:14:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:03 compute-0 podman[81525]: 2025-12-13 07:14:03.011114694 +0000 UTC m=+0.074708548 container init ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:03 compute-0 podman[81525]: 2025-12-13 07:14:03.015996322 +0000 UTC m=+0.079590156 container start ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:14:03 compute-0 podman[81525]: 2025-12-13 07:14:03.017117509 +0000 UTC m=+0.080711344 container attach ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:14:03 compute-0 serene_khayyam[81538]: 167 167
Dec 13 07:14:03 compute-0 systemd[1]: libpod-ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877.scope: Deactivated successfully.
Dec 13 07:14:03 compute-0 podman[81525]: 2025-12-13 07:14:03.020035345 +0000 UTC m=+0.083629178 container died ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 07:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d6d1fd0db588dec5ed01f4097df45d6638dbad364018b26e4f5c83afc564be0-merged.mount: Deactivated successfully.
Dec 13 07:14:03 compute-0 podman[81525]: 2025-12-13 07:14:03.035522362 +0000 UTC m=+0.099116196 container remove ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:03 compute-0 podman[81525]: 2025-12-13 07:14:02.950756747 +0000 UTC m=+0.014350601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:03 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:14:03 compute-0 systemd[1]: libpod-conmon-ec7d7ab89c7c24970d74db391eb1f4c0c53bb288617eb18a721396f1feb9b877.scope: Deactivated successfully.
Dec 13 07:14:03 compute-0 podman[81560]: 2025-12-13 07:14:03.145269453 +0000 UTC m=+0.027094630 container create 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 07:14:03 compute-0 systemd[1]: Started libpod-conmon-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope.
Dec 13 07:14:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:03 compute-0 podman[81560]: 2025-12-13 07:14:03.211320715 +0000 UTC m=+0.093145901 container init 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 07:14:03 compute-0 podman[81560]: 2025-12-13 07:14:03.216777715 +0000 UTC m=+0.098602881 container start 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:14:03 compute-0 podman[81560]: 2025-12-13 07:14:03.218095452 +0000 UTC m=+0.099920639 container attach 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:03 compute-0 podman[81560]: 2025-12-13 07:14:03.134849162 +0000 UTC m=+0.016674340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:03 compute-0 ceph-mon[74928]: Removing key for mgr.compute-0.ndpimg
Dec 13 07:14:03 compute-0 ceph-mon[74928]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:03 compute-0 angry_hofstadter[81573]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:14:03 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:03 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:03 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 82d490c1-ea27-486f-9cfe-f392b9710718
Dec 13 07:14:04 compute-0 ceph-mgr[75200]: [progress INFO root] Writing back 3 completed events
Dec 13 07:14:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 07:14:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"} v 0)
Dec 13 07:14:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3552933581' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"} : dispatch
Dec 13 07:14:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 13 07:14:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:14:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3552933581' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"}]': finished
Dec 13 07:14:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 13 07:14:04 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 13 07:14:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:04 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 13 07:14:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:04 compute-0 lvm[81665]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:04 compute-0 lvm[81665]: VG ceph_vg0 finished
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 13 07:14:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 07:14:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455557760' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]:  stderr: got monmap epoch 1
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: --> Creating keyring file for osd.0
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 13 07:14:04 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 82d490c1-ea27-486f-9cfe-f392b9710718 --setuser ceph --setgroup ceph
Dec 13 07:14:05 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:14:05 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3552933581' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"} : dispatch
Dec 13 07:14:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3552933581' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82d490c1-ea27-486f-9cfe-f392b9710718"}]': finished
Dec 13 07:14:05 compute-0 ceph-mon[74928]: osdmap e4: 1 total, 0 up, 1 in
Dec 13 07:14:05 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:05 compute-0 ceph-mon[74928]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1455557760' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 07:14:05 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 13 07:14:05 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]:  stderr: 2025-12-13T07:14:04.665+0000 7f04e77ea8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]:  stderr: 2025-12-13T07:14:04.684+0000 7f04e77ea8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf
Dec 13 07:14:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"} v 0)
Dec 13 07:14:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3438202900' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"} : dispatch
Dec 13 07:14:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 13 07:14:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:14:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3438202900' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"}]': finished
Dec 13 07:14:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 13 07:14:05 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 13 07:14:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:05 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 07:14:05 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:05 compute-0 lvm[82606]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:14:05 compute-0 lvm[82606]: VG ceph_vg1 finished
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:05 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 13 07:14:06 compute-0 ceph-mon[74928]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 13 07:14:06 compute-0 ceph-mon[74928]: Cluster is now healthy
Dec 13 07:14:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3438202900' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"} : dispatch
Dec 13 07:14:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3438202900' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf"}]': finished
Dec 13 07:14:06 compute-0 ceph-mon[74928]: osdmap e5: 2 total, 0 up, 2 in
Dec 13 07:14:06 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:06 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 07:14:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412080431' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]:  stderr: got monmap epoch 1
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: --> Creating keyring file for osd.1
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf --setuser ceph --setgroup ceph
Dec 13 07:14:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]:  stderr: 2025-12-13T07:14:06.205+0000 7fa9dcd998c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]:  stderr: 2025-12-13T07:14:06.223+0000 7fa9dcd998c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:06 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b927bbdd-6a1c-42b3-b097-3003acae4885
Dec 13 07:14:07 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:14:07 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/412080431' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 07:14:07 compute-0 ceph-mon[74928]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"} v 0)
Dec 13 07:14:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2213894842' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"} : dispatch
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:14:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2213894842' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"}]': finished
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 13 07:14:07 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:07 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:07 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:07 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 13 07:14:07 compute-0 lvm[83546]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:07 compute-0 lvm[83546]: VG ceph_vg2 finished
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 07:14:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714765293' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]:  stderr: got monmap epoch 1
Dec 13 07:14:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: --> Creating keyring file for osd.2
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 13 07:14:07 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b927bbdd-6a1c-42b3-b097-3003acae4885 --setuser ceph --setgroup ceph
Dec 13 07:14:08 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2213894842' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"} : dispatch
Dec 13 07:14:08 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2213894842' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b927bbdd-6a1c-42b3-b097-3003acae4885"}]': finished
Dec 13 07:14:08 compute-0 ceph-mon[74928]: osdmap e6: 3 total, 0 up, 3 in
Dec 13 07:14:08 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:08 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:08 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:08 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3714765293' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 07:14:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]:  stderr: 2025-12-13T07:14:07.759+0000 7f2f1ba098c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]:  stderr: 2025-12-13T07:14:07.777+0000 7f2f1ba098c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 07:14:08 compute-0 angry_hofstadter[81573]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 13 07:14:08 compute-0 systemd[1]: libpod-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope: Deactivated successfully.
Dec 13 07:14:08 compute-0 systemd[1]: libpod-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope: Consumed 4.197s CPU time.
Dec 13 07:14:08 compute-0 conmon[81573]: conmon 9780f177353a85440b02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope/container/memory.events
Dec 13 07:14:08 compute-0 podman[84454]: 2025-12-13 07:14:08.456470357 +0000 UTC m=+0.015309914 container died 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 07:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2ec84a22bda216e97086a54e8d9d2e81f1fbcbc41380e0c30a7e78d1d622a88-merged.mount: Deactivated successfully.
Dec 13 07:14:08 compute-0 podman[84454]: 2025-12-13 07:14:08.477640879 +0000 UTC m=+0.036480436 container remove 9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:14:08 compute-0 systemd[1]: libpod-conmon-9780f177353a85440b02d3f788d54f4d0955d87514e28948cf5ded454189efaa.scope: Deactivated successfully.
Dec 13 07:14:08 compute-0 sudo[81490]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:08 compute-0 sudo[84466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:08 compute-0 sudo[84466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:08 compute-0 sudo[84466]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:08 compute-0 sudo[84491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:14:08 compute-0 sudo[84491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:08 compute-0 podman[84526]: 2025-12-13 07:14:08.814408095 +0000 UTC m=+0.029356023 container create 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:14:08 compute-0 systemd[1]: Started libpod-conmon-91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d.scope.
Dec 13 07:14:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:08 compute-0 podman[84526]: 2025-12-13 07:14:08.871461499 +0000 UTC m=+0.086409426 container init 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:08 compute-0 podman[84526]: 2025-12-13 07:14:08.876944427 +0000 UTC m=+0.091892355 container start 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 07:14:08 compute-0 podman[84526]: 2025-12-13 07:14:08.878021582 +0000 UTC m=+0.092969510 container attach 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:14:08 compute-0 thirsty_roentgen[84539]: 167 167
Dec 13 07:14:08 compute-0 systemd[1]: libpod-91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d.scope: Deactivated successfully.
Dec 13 07:14:08 compute-0 podman[84526]: 2025-12-13 07:14:08.8806798 +0000 UTC m=+0.095627728 container died 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f1dfd8a80c80efbc67f0deb4d743e424addacb971d8d21f113b537b421514e0-merged.mount: Deactivated successfully.
Dec 13 07:14:08 compute-0 podman[84526]: 2025-12-13 07:14:08.898182536 +0000 UTC m=+0.113130464 container remove 91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:08 compute-0 podman[84526]: 2025-12-13 07:14:08.803055341 +0000 UTC m=+0.018003269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:08 compute-0 systemd[1]: libpod-conmon-91303fa0c019e71baf2750f8261ccf659251e09aec7d95d19feb93c318b9ea9d.scope: Deactivated successfully.
Dec 13 07:14:09 compute-0 podman[84561]: 2025-12-13 07:14:09.010602873 +0000 UTC m=+0.028553223 container create 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:09 compute-0 systemd[1]: Started libpod-conmon-17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e.scope.
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:14:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:14:09 compute-0 ceph-mon[74928]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:09 compute-0 podman[84561]: 2025-12-13 07:14:09.063494935 +0000 UTC m=+0.081445283 container init 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:09 compute-0 podman[84561]: 2025-12-13 07:14:09.070326949 +0000 UTC m=+0.088277299 container start 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:14:09 compute-0 podman[84561]: 2025-12-13 07:14:09.07173142 +0000 UTC m=+0.089681768 container attach 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:09 compute-0 podman[84561]: 2025-12-13 07:14:08.999715225 +0000 UTC m=+0.017665594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:09 compute-0 magical_antonelli[84574]: {
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:     "0": [
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:         {
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "devices": [
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "/dev/loop3"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             ],
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_name": "ceph_lv0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_size": "21470642176",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "name": "ceph_lv0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "tags": {
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.crush_device_class": "",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.encrypted": "0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osd_id": "0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.type": "block",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.vdo": "0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.with_tpm": "0"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             },
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "type": "block",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "vg_name": "ceph_vg0"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:         }
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:     ],
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:     "1": [
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:         {
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "devices": [
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "/dev/loop4"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             ],
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_name": "ceph_lv1",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_size": "21470642176",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "name": "ceph_lv1",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "tags": {
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.crush_device_class": "",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.encrypted": "0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osd_id": "1",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.type": "block",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.vdo": "0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.with_tpm": "0"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             },
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "type": "block",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "vg_name": "ceph_vg1"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:         }
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:     ],
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:     "2": [
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:         {
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "devices": [
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "/dev/loop5"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             ],
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_name": "ceph_lv2",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_size": "21470642176",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "name": "ceph_lv2",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "tags": {
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.crush_device_class": "",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.encrypted": "0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osd_id": "2",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.type": "block",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.vdo": "0",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:                 "ceph.with_tpm": "0"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             },
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "type": "block",
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:             "vg_name": "ceph_vg2"
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:         }
Dec 13 07:14:09 compute-0 magical_antonelli[84574]:     ]
Dec 13 07:14:09 compute-0 magical_antonelli[84574]: }
Dec 13 07:14:09 compute-0 systemd[1]: libpod-17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e.scope: Deactivated successfully.
Dec 13 07:14:09 compute-0 podman[84561]: 2025-12-13 07:14:09.311405263 +0000 UTC m=+0.329355612 container died 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-89933ee2d1d79bac39c20f4baa1392aaa32622edbf18e54eca214b6729d46a04-merged.mount: Deactivated successfully.
Dec 13 07:14:09 compute-0 podman[84561]: 2025-12-13 07:14:09.333065536 +0000 UTC m=+0.351015885 container remove 17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:09 compute-0 systemd[1]: libpod-conmon-17a78f5e2214de9015bd52719697bd7964aa944c2a1ca6a49c042a0345f3be5e.scope: Deactivated successfully.
Dec 13 07:14:09 compute-0 sudo[84491]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 13 07:14:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 13 07:14:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 13 07:14:09 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 13 07:14:09 compute-0 sudo[84593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:09 compute-0 sudo[84593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:09 compute-0 sudo[84593]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:09 compute-0 sudo[84618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:14:09 compute-0 sudo[84618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:09 compute-0 podman[84678]: 2025-12-13 07:14:09.748484791 +0000 UTC m=+0.028504250 container create c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:09 compute-0 systemd[1]: Started libpod-conmon-c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87.scope.
Dec 13 07:14:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:09 compute-0 podman[84678]: 2025-12-13 07:14:09.794890066 +0000 UTC m=+0.074909516 container init c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 07:14:09 compute-0 podman[84678]: 2025-12-13 07:14:09.799765102 +0000 UTC m=+0.079784551 container start c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:09 compute-0 podman[84678]: 2025-12-13 07:14:09.800968174 +0000 UTC m=+0.080987623 container attach c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:09 compute-0 stupefied_wilbur[84691]: 167 167
Dec 13 07:14:09 compute-0 systemd[1]: libpod-c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87.scope: Deactivated successfully.
Dec 13 07:14:09 compute-0 podman[84696]: 2025-12-13 07:14:09.832306532 +0000 UTC m=+0.015889885 container died c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:09 compute-0 podman[84678]: 2025-12-13 07:14:09.736918245 +0000 UTC m=+0.016937715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6284b8732a79d90527d5201718ee9a72c37f3defb289445bf5fc2b6fa955c7e-merged.mount: Deactivated successfully.
Dec 13 07:14:09 compute-0 podman[84696]: 2025-12-13 07:14:09.848399347 +0000 UTC m=+0.031982680 container remove c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:09 compute-0 systemd[1]: libpod-conmon-c36ce2110d273970166e855f3451f654a1f70cb958c455b322add2dab8ed3e87.scope: Deactivated successfully.
Dec 13 07:14:10 compute-0 podman[84720]: 2025-12-13 07:14:10.027486634 +0000 UTC m=+0.030509320 container create f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:14:10 compute-0 systemd[1]: Started libpod-conmon-f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d.scope.
Dec 13 07:14:10 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 13 07:14:10 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:10 compute-0 ceph-mon[74928]: Deploying daemon osd.0 on compute-0
Dec 13 07:14:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:10 compute-0 podman[84720]: 2025-12-13 07:14:10.096761254 +0000 UTC m=+0.099783950 container init f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:14:10 compute-0 podman[84720]: 2025-12-13 07:14:10.102117765 +0000 UTC m=+0.105140442 container start f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:14:10 compute-0 podman[84720]: 2025-12-13 07:14:10.103110662 +0000 UTC m=+0.106133348 container attach f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:14:10 compute-0 podman[84720]: 2025-12-13 07:14:10.017146113 +0000 UTC m=+0.020168809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:10 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test[84733]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 07:14:10 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test[84733]:                             [--no-systemd] [--no-tmpfs]
Dec 13 07:14:10 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test[84733]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 07:14:10 compute-0 systemd[1]: libpod-f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d.scope: Deactivated successfully.
Dec 13 07:14:10 compute-0 podman[84720]: 2025-12-13 07:14:10.25672094 +0000 UTC m=+0.259743636 container died f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 07:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d492730f843b57727c7f957f4474ed3cbc6586f638a41b67bf3faa5d80d17d81-merged.mount: Deactivated successfully.
Dec 13 07:14:10 compute-0 podman[84720]: 2025-12-13 07:14:10.278914585 +0000 UTC m=+0.281937271 container remove f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate-test, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 07:14:10 compute-0 systemd[1]: libpod-conmon-f5d8bd1b35042614e0e682dae1d0017aa391f7a3e0518ff4f6e3c75f5f5c571d.scope: Deactivated successfully.
Dec 13 07:14:10 compute-0 systemd[1]: Reloading.
Dec 13 07:14:10 compute-0 systemd-sysv-generator[84789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:10 compute-0 systemd-rc-local-generator[84783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:10 compute-0 systemd[1]: Reloading.
Dec 13 07:14:10 compute-0 systemd-sysv-generator[84828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:10 compute-0 systemd-rc-local-generator[84825]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:10 compute-0 systemd[1]: Starting Ceph osd.0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:14:11 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:14:11 compute-0 podman[84883]: 2025-12-13 07:14:11.05088436 +0000 UTC m=+0.028288976 container create d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:11 compute-0 ceph-mon[74928]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:11 compute-0 podman[84883]: 2025-12-13 07:14:11.104341002 +0000 UTC m=+0.081745618 container init d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:11 compute-0 podman[84883]: 2025-12-13 07:14:11.108909761 +0000 UTC m=+0.086314367 container start d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 07:14:11 compute-0 podman[84883]: 2025-12-13 07:14:11.112409842 +0000 UTC m=+0.089814468 container attach d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 07:14:11 compute-0 podman[84883]: 2025-12-13 07:14:11.039282337 +0000 UTC m=+0.016686964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:11 compute-0 lvm[84979]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:11 compute-0 lvm[84980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:14:11 compute-0 lvm[84979]: VG ceph_vg0 finished
Dec 13 07:14:11 compute-0 lvm[84980]: VG ceph_vg1 finished
Dec 13 07:14:11 compute-0 lvm[84983]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:11 compute-0 lvm[84983]: VG ceph_vg2 finished
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:11 compute-0 bash[84883]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 07:14:11 compute-0 bash[84883]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 07:14:11 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate[84895]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 07:14:11 compute-0 bash[84883]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 07:14:11 compute-0 systemd[1]: libpod-d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e.scope: Deactivated successfully.
Dec 13 07:14:11 compute-0 podman[84883]: 2025-12-13 07:14:11.925754796 +0000 UTC m=+0.903159412 container died d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:14:11 compute-0 systemd[1]: libpod-d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e.scope: Consumed 1.133s CPU time.
Dec 13 07:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b93e04b2a1b6337312b39fbbc810aef378ac51d3bf18186e66f0c421c05cba0c-merged.mount: Deactivated successfully.
Dec 13 07:14:11 compute-0 podman[84883]: 2025-12-13 07:14:11.949347651 +0000 UTC m=+0.926752257 container remove d81c1093e9a8e56fd0a02c3dff4ed9509b79f42968910def74dcb033dde2754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0-activate, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:12 compute-0 podman[85124]: 2025-12-13 07:14:12.094803658 +0000 UTC m=+0.028672427 container create 5e169e1385f98bf8a58844e41c31305318f100b9850e1f4defaf308d2b1dfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89559cb37910374ddd1f527bb0a82bdc91c2d7a0c74c265319fb98ee5101af6/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 podman[85124]: 2025-12-13 07:14:12.145040517 +0000 UTC m=+0.078909286 container init 5e169e1385f98bf8a58844e41c31305318f100b9850e1f4defaf308d2b1dfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:14:12 compute-0 podman[85124]: 2025-12-13 07:14:12.149431763 +0000 UTC m=+0.083300532 container start 5e169e1385f98bf8a58844e41c31305318f100b9850e1f4defaf308d2b1dfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:14:12 compute-0 bash[85124]: 5e169e1385f98bf8a58844e41c31305318f100b9850e1f4defaf308d2b1dfde7
Dec 13 07:14:12 compute-0 podman[85124]: 2025-12-13 07:14:12.083018402 +0000 UTC m=+0.016887171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:12 compute-0 systemd[1]: Started Ceph osd.0 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:14:12 compute-0 ceph-osd[85140]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: pidfile_write: ignore empty --pid-file
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 sudo[84618]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 13 07:14:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 13 07:14:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:12 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 13 07:14:12 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 sudo[85153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:12 compute-0 sudo[85153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 sudo[85153]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f62211800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 13 07:14:12 compute-0 ceph-osd[85140]: load: jerasure load: lrc 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 sudo[85189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:14:12 compute-0 sudo[85189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 07:14:12 compute-0 ceph-osd[85140]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount shared_bdev_used = 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: RocksDB version: 7.9.2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Git sha 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: DB SUMMARY
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: DB Session ID:  ALGEVV9HATHAALWVAQ6X
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: CURRENT file:  CURRENT
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                         Options.error_if_exists: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.create_if_missing: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                                     Options.env: 0x557f62281ea0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                                Options.info_log: 0x557f632e88a0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                              Options.statistics: (nil)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.use_fsync: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                              Options.db_log_dir: 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.write_buffer_manager: 0x557f622e2b40
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.unordered_write: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.row_cache: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                              Options.wal_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.two_write_queues: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.wal_compression: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.atomic_flush: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.max_background_jobs: 4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.max_background_compactions: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.max_subcompactions: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.max_open_files: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Compression algorithms supported:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kZSTD supported: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kXpressCompression supported: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kBZip2Compression supported: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kLZ4Compression supported: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kZlibCompression supported: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kSnappyCompression supported: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f62285a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f62285a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f62285a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 255f2825-be90-45a3-bc3a-4eac136bcf1c
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052436844, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052438192, "job": 1, "event": "recovery_finished"}
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: freelist init
Dec 13 07:14:12 compute-0 ceph-osd[85140]: freelist _read_cfg
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs umount
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bdev(0x557f622d9000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluefs mount shared_bdev_used = 27262976
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: RocksDB version: 7.9.2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Git sha 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: DB SUMMARY
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: DB Session ID:  ALGEVV9HATHAALWVAQ6W
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: CURRENT file:  CURRENT
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                         Options.error_if_exists: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.create_if_missing: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                                     Options.env: 0x557f634baa80
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                                Options.info_log: 0x557f632e8a20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                              Options.statistics: (nil)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.use_fsync: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                              Options.db_log_dir: 
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.write_buffer_manager: 0x557f622e3900
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.unordered_write: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.row_cache: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                              Options.wal_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.two_write_queues: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.wal_compression: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.atomic_flush: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.max_background_jobs: 4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.max_background_compactions: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.max_subcompactions: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.max_open_files: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Compression algorithms supported:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kZSTD supported: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kXpressCompression supported: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kBZip2Compression supported: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kLZ4Compression supported: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kZlibCompression supported: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         kSnappyCompression supported: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f622858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f62285a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f62285a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557f632e90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557f62285a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 255f2825-be90-45a3-bc3a-4eac136bcf1c
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052483429, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052486215, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610052, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "255f2825-be90-45a3-bc3a-4eac136bcf1c", "db_session_id": "ALGEVV9HATHAALWVAQ6W", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052487985, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610052, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "255f2825-be90-45a3-bc3a-4eac136bcf1c", "db_session_id": "ALGEVV9HATHAALWVAQ6W", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052489519, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610052, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "255f2825-be90-45a3-bc3a-4eac136bcf1c", "db_session_id": "ALGEVV9HATHAALWVAQ6W", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610052491132, "job": 1, "event": "recovery_finished"}
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557f634ce000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: DB pointer 0x557f634a4000
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 13 07:14:12 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:14:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.05 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.05 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:14:12 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 07:14:12 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 07:14:12 compute-0 ceph-osd[85140]: _get_class not permitted to load lua
Dec 13 07:14:12 compute-0 ceph-osd[85140]: _get_class not permitted to load sdk
Dec 13 07:14:12 compute-0 ceph-osd[85140]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 07:14:12 compute-0 ceph-osd[85140]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 07:14:12 compute-0 ceph-osd[85140]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 07:14:12 compute-0 ceph-osd[85140]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 07:14:12 compute-0 ceph-osd[85140]: osd.0 0 load_pgs
Dec 13 07:14:12 compute-0 ceph-osd[85140]: osd.0 0 load_pgs opened 0 pgs
Dec 13 07:14:12 compute-0 ceph-osd[85140]: osd.0 0 log_to_monitors true
Dec 13 07:14:12 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0[85136]: 2025-12-13T07:14:12.509+0000 7f1efa0d98c0 -1 osd.0 0 log_to_monitors true
Dec 13 07:14:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 13 07:14:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 13 07:14:12 compute-0 podman[85676]: 2025-12-13 07:14:12.607426164 +0000 UTC m=+0.027249853 container create 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 07:14:12 compute-0 systemd[1]: Started libpod-conmon-8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8.scope.
Dec 13 07:14:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:12 compute-0 podman[85676]: 2025-12-13 07:14:12.664074565 +0000 UTC m=+0.083898266 container init 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 07:14:12 compute-0 podman[85676]: 2025-12-13 07:14:12.668956565 +0000 UTC m=+0.088780254 container start 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:14:12 compute-0 podman[85676]: 2025-12-13 07:14:12.669995968 +0000 UTC m=+0.089819659 container attach 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:12 compute-0 vibrant_heyrovsky[85689]: 167 167
Dec 13 07:14:12 compute-0 podman[85676]: 2025-12-13 07:14:12.672830297 +0000 UTC m=+0.092653987 container died 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:14:12 compute-0 systemd[1]: libpod-8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8.scope: Deactivated successfully.
Dec 13 07:14:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87e8cdb50aa6252ffed7d7608533f64fd6b344e4ea5e650a71d1e5efc99e8e7-merged.mount: Deactivated successfully.
Dec 13 07:14:12 compute-0 podman[85676]: 2025-12-13 07:14:12.691808898 +0000 UTC m=+0.111632589 container remove 8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:14:12 compute-0 podman[85676]: 2025-12-13 07:14:12.59642466 +0000 UTC m=+0.016248370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:12 compute-0 systemd[1]: libpod-conmon-8b1829afb6ea68e40ad4152a3ec3f195859368c38bc48fbab865780ffda484a8.scope: Deactivated successfully.
Dec 13 07:14:12 compute-0 podman[85716]: 2025-12-13 07:14:12.870312066 +0000 UTC m=+0.030558042 container create ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:12 compute-0 systemd[1]: Started libpod-conmon-ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e.scope.
Dec 13 07:14:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:12 compute-0 podman[85716]: 2025-12-13 07:14:12.933052001 +0000 UTC m=+0.093297997 container init ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:14:12 compute-0 podman[85716]: 2025-12-13 07:14:12.938642803 +0000 UTC m=+0.098888779 container start ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 07:14:12 compute-0 podman[85716]: 2025-12-13 07:14:12.939665145 +0000 UTC m=+0.099911121 container attach ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:14:12 compute-0 podman[85716]: 2025-12-13 07:14:12.85941555 +0000 UTC m=+0.019661546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:13 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:14:13 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test[85731]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 07:14:13 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test[85731]:                             [--no-systemd] [--no-tmpfs]
Dec 13 07:14:13 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test[85731]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 07:14:13 compute-0 systemd[1]: libpod-ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e.scope: Deactivated successfully.
Dec 13 07:14:13 compute-0 podman[85716]: 2025-12-13 07:14:13.098757469 +0000 UTC m=+0.259003445 container died ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-28b09b6645b8706d6d3e486dd7e7fe43b8fc00514ec1a2ff49d48b88d12f5cc9-merged.mount: Deactivated successfully.
Dec 13 07:14:13 compute-0 podman[85716]: 2025-12-13 07:14:13.121146041 +0000 UTC m=+0.281392017 container remove ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 07:14:13 compute-0 systemd[1]: libpod-conmon-ce843830882d285cb0349a5f8ab649894471c6dc4c6d48b6e0d3e65ecd9edd6e.scope: Deactivated successfully.
Dec 13 07:14:13 compute-0 ceph-mon[74928]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 13 07:14:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:13 compute-0 ceph-mon[74928]: Deploying daemon osd.1 on compute-0
Dec 13 07:14:13 compute-0 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 13 07:14:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 13 07:14:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:14:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 13 07:14:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 13 07:14:13 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 13 07:14:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 07:14:13 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 07:14:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 07:14:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:13 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 07:14:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:13 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:13 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:13 compute-0 systemd[1]: Reloading.
Dec 13 07:14:13 compute-0 systemd-rc-local-generator[85783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:13 compute-0 systemd-sysv-generator[85787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:13 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 07:14:13 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 07:14:13 compute-0 systemd[1]: Reloading.
Dec 13 07:14:13 compute-0 systemd-sysv-generator[85827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:13 compute-0 systemd-rc-local-generator[85824]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:13 compute-0 systemd[1]: Starting Ceph osd.1 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:14:13 compute-0 podman[85877]: 2025-12-13 07:14:13.9067344 +0000 UTC m=+0.028744924 container create b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:14:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:13 compute-0 podman[85877]: 2025-12-13 07:14:13.948248308 +0000 UTC m=+0.070258832 container init b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:13 compute-0 podman[85877]: 2025-12-13 07:14:13.953236086 +0000 UTC m=+0.075246600 container start b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:14:13 compute-0 podman[85877]: 2025-12-13 07:14:13.954959236 +0000 UTC m=+0.076969750 container attach b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:14:13 compute-0 podman[85877]: 2025-12-13 07:14:13.894345588 +0000 UTC m=+0.016356122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 13 07:14:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:14:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 07:14:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 13 07:14:14 compute-0 ceph-osd[85140]: osd.0 0 done with init, starting boot process
Dec 13 07:14:14 compute-0 ceph-osd[85140]: osd.0 0 start_boot
Dec 13 07:14:14 compute-0 ceph-osd[85140]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 07:14:14 compute-0 ceph-osd[85140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 07:14:14 compute-0 ceph-osd[85140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 07:14:14 compute-0 ceph-osd[85140]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 07:14:14 compute-0 ceph-osd[85140]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 13 07:14:14 compute-0 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 13 07:14:14 compute-0 ceph-mon[74928]: osdmap e7: 3 total, 0 up, 3 in
Dec 13 07:14:14 compute-0 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 07:14:14 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:14 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:14 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:14 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 13 07:14:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:14 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 07:14:14 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:14 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:14 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3611488797; not ready for session (expect reconnect)
Dec 13 07:14:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:14 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 07:14:14 compute-0 lvm[85971]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:14 compute-0 lvm[85971]: VG ceph_vg0 finished
Dec 13 07:14:14 compute-0 lvm[85973]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:14:14 compute-0 lvm[85973]: VG ceph_vg1 finished
Dec 13 07:14:14 compute-0 lvm[85975]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:14 compute-0 lvm[85975]: VG ceph_vg2 finished
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:14 compute-0 bash[85877]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:14 compute-0 lvm[85977]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:14 compute-0 lvm[85977]: VG ceph_vg2 finished
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 07:14:14 compute-0 bash[85877]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 07:14:14 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate[85889]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 07:14:14 compute-0 bash[85877]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 07:14:14 compute-0 systemd[1]: libpod-b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235.scope: Deactivated successfully.
Dec 13 07:14:14 compute-0 systemd[1]: libpod-b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235.scope: Consumed 1.132s CPU time.
Dec 13 07:14:14 compute-0 podman[85877]: 2025-12-13 07:14:14.813904991 +0000 UTC m=+0.935915505 container died b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 07:14:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8764e7e1896e1ad689ac466e400826bf8e80526df4d0e7f2862382dd8dce1af0-merged.mount: Deactivated successfully.
Dec 13 07:14:14 compute-0 podman[85877]: 2025-12-13 07:14:14.912084766 +0000 UTC m=+1.034095281 container remove b154fd48736f3e08f17665943c8f20ac3e0e7e0b529e98688c16a073fc15c235 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1-activate, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 07:14:15 compute-0 ceph-mgr[75200]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 07:14:15 compute-0 podman[86126]: 2025-12-13 07:14:15.120844829 +0000 UTC m=+0.088886333 container create c0e0c03f97b0b2b02555f476cf4558ef6f7c2cd731350718d9262d59a0b7be03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:15 compute-0 podman[86126]: 2025-12-13 07:14:15.049285543 +0000 UTC m=+0.017327077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e51303f4b9f368d744b6dadce3bbf2364b12d9f150d990d2abdc488ca47952/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:15 compute-0 podman[86126]: 2025-12-13 07:14:15.217156191 +0000 UTC m=+0.185197715 container init c0e0c03f97b0b2b02555f476cf4558ef6f7c2cd731350718d9262d59a0b7be03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:15 compute-0 podman[86126]: 2025-12-13 07:14:15.221887065 +0000 UTC m=+0.189928569 container start c0e0c03f97b0b2b02555f476cf4558ef6f7c2cd731350718d9262d59a0b7be03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:14:15 compute-0 bash[86126]: c0e0c03f97b0b2b02555f476cf4558ef6f7c2cd731350718d9262d59a0b7be03
Dec 13 07:14:15 compute-0 systemd[1]: Started Ceph osd.1 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:14:15 compute-0 sudo[85189]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:15 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3611488797; not ready for session (expect reconnect)
Dec 13 07:14:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:15 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:15 compute-0 ceph-mon[74928]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:15 compute-0 ceph-mon[74928]: from='osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 07:14:15 compute-0 ceph-mon[74928]: osdmap e8: 3 total, 0 up, 3 in
Dec 13 07:14:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:15 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 07:14:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:15 compute-0 ceph-osd[86142]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: pidfile_write: ignore empty --pid-file
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 13 07:14:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 13 07:14:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:15 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:15 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 13 07:14:15 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 sudo[86154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:15 compute-0 sudo[86154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 sudo[86154]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0bc00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fdde0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 13 07:14:15 compute-0 sudo[86187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:14:15 compute-0 sudo[86187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: load: jerasure load: lrc 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 07:14:15 compute-0 ceph-osd[86142]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount shared_bdev_used = 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: RocksDB version: 7.9.2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Git sha 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: DB SUMMARY
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: DB Session ID:  GQTJY16QVIP0A3829K2S
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: CURRENT file:  CURRENT
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                         Options.error_if_exists: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.create_if_missing: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                                     Options.env: 0x560fdecc3c00
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                                Options.info_log: 0x560fdeece900
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                              Options.statistics: (nil)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.use_fsync: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                              Options.db_log_dir: 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.write_buffer_manager: 0x560fded74b40
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.unordered_write: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.row_cache: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                              Options.wal_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.two_write_queues: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.wal_compression: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.atomic_flush: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.max_background_jobs: 4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.max_background_compactions: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.max_subcompactions: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.max_open_files: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Compression algorithms supported:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kZSTD supported: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kXpressCompression supported: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kBZip2Compression supported: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kLZ4Compression supported: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kZlibCompression supported: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kSnappyCompression supported: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeececc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecece0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecece0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecece0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bc03593d-e9d5-4c06-9aa7-16048552921e
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055565082, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055566810, "job": 1, "event": "recovery_finished"}
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: freelist init
Dec 13 07:14:15 compute-0 ceph-osd[86142]: freelist _read_cfg
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs umount
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bdev(0x560fddedb000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluefs mount shared_bdev_used = 27262976
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: RocksDB version: 7.9.2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Git sha 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: DB SUMMARY
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: DB Session ID:  GQTJY16QVIP0A3829K2T
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: CURRENT file:  CURRENT
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                         Options.error_if_exists: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.create_if_missing: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                                     Options.env: 0x560fdf0a0af0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                                Options.info_log: 0x560fdeecea80
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                              Options.statistics: (nil)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.use_fsync: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                              Options.db_log_dir: 
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.write_buffer_manager: 0x560fded75900
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.unordered_write: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.row_cache: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                              Options.wal_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.two_write_queues: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.wal_compression: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.atomic_flush: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.max_background_jobs: 4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.max_background_compactions: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.max_subcompactions: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.max_open_files: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Compression algorithms supported:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kZSTD supported: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kXpressCompression supported: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kBZip2Compression supported: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kLZ4Compression supported: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kZlibCompression supported: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         kSnappyCompression supported: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecec20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecf120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecf120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fdeecf120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fdde7fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bc03593d-e9d5-4c06-9aa7-16048552921e
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055619423, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055621167, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610055, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc03593d-e9d5-4c06-9aa7-16048552921e", "db_session_id": "GQTJY16QVIP0A3829K2T", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055622242, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610055, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc03593d-e9d5-4c06-9aa7-16048552921e", "db_session_id": "GQTJY16QVIP0A3829K2T", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055625850, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610055, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc03593d-e9d5-4c06-9aa7-16048552921e", "db_session_id": "GQTJY16QVIP0A3829K2T", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610055627460, "job": 1, "event": "recovery_finished"}
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560fdf0ea000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: DB pointer 0x560fdf08a000
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 13 07:14:15 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:14:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:14:15 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 07:14:15 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 07:14:15 compute-0 ceph-osd[86142]: _get_class not permitted to load lua
Dec 13 07:14:15 compute-0 ceph-osd[86142]: _get_class not permitted to load sdk
Dec 13 07:14:15 compute-0 ceph-osd[86142]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 07:14:15 compute-0 ceph-osd[86142]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 07:14:15 compute-0 ceph-osd[86142]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 07:14:15 compute-0 ceph-osd[86142]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 07:14:15 compute-0 ceph-osd[86142]: osd.1 0 load_pgs
Dec 13 07:14:15 compute-0 ceph-osd[86142]: osd.1 0 load_pgs opened 0 pgs
Dec 13 07:14:15 compute-0 ceph-osd[86142]: osd.1 0 log_to_monitors true
Dec 13 07:14:15 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1[86138]: 2025-12-13T07:14:15.666+0000 7fe8c07658c0 -1 osd.1 0 log_to_monitors true
Dec 13 07:14:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 13 07:14:15 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 13 07:14:15 compute-0 podman[86677]: 2025-12-13 07:14:15.7546951 +0000 UTC m=+0.032232880 container create 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 07:14:15 compute-0 systemd[1]: Started libpod-conmon-790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be.scope.
Dec 13 07:14:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:15 compute-0 podman[86677]: 2025-12-13 07:14:15.809606627 +0000 UTC m=+0.087144407 container init 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:15 compute-0 podman[86677]: 2025-12-13 07:14:15.814720913 +0000 UTC m=+0.092258693 container start 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:15 compute-0 podman[86677]: 2025-12-13 07:14:15.817084216 +0000 UTC m=+0.094621996 container attach 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:15 compute-0 wizardly_kirch[86690]: 167 167
Dec 13 07:14:15 compute-0 systemd[1]: libpod-790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be.scope: Deactivated successfully.
Dec 13 07:14:15 compute-0 podman[86677]: 2025-12-13 07:14:15.81825665 +0000 UTC m=+0.095794430 container died 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 07:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-04d2417f7b8ef51f3bcf5bfc7290bfa50f7d2bfec174801dcef185e108939331-merged.mount: Deactivated successfully.
Dec 13 07:14:15 compute-0 podman[86677]: 2025-12-13 07:14:15.741335433 +0000 UTC m=+0.018873234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:15 compute-0 podman[86677]: 2025-12-13 07:14:15.849494009 +0000 UTC m=+0.127031789 container remove 790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kirch, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:15 compute-0 systemd[1]: libpod-conmon-790f70554662ab953951b7ce61d476c7c80960cebea3b7e06b3d495df7d764be.scope: Deactivated successfully.
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 83.597 iops: 21400.840 elapsed_sec: 0.140
Dec 13 07:14:15 compute-0 ceph-osd[85140]: log_channel(cluster) log [WRN] : OSD bench result of 21400.840372 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 0 waiting for initial osdmap
Dec 13 07:14:15 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0[85136]: 2025-12-13T07:14:15.864+0000 7f1ef605b640 -1 osd.0 0 waiting for initial osdmap
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 8 set_numa_affinity not setting numa affinity
Dec 13 07:14:15 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-0[85136]: 2025-12-13T07:14:15.880+0000 7f1ef0e60640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 07:14:15 compute-0 ceph-osd[85140]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 13 07:14:16 compute-0 podman[86717]: 2025-12-13 07:14:16.021357413 +0000 UTC m=+0.027836576 container create 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:16 compute-0 systemd[1]: Started libpod-conmon-9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7.scope.
Dec 13 07:14:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:16 compute-0 podman[86717]: 2025-12-13 07:14:16.075364099 +0000 UTC m=+0.081843263 container init 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 07:14:16 compute-0 podman[86717]: 2025-12-13 07:14:16.080100744 +0000 UTC m=+0.086579907 container start 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 07:14:16 compute-0 podman[86717]: 2025-12-13 07:14:16.081473305 +0000 UTC m=+0.087952468 container attach 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 07:14:16 compute-0 podman[86717]: 2025-12-13 07:14:16.010554573 +0000 UTC m=+0.017033736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:16 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test[86730]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 07:14:16 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test[86730]:                             [--no-systemd] [--no-tmpfs]
Dec 13 07:14:16 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test[86730]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 07:14:16 compute-0 systemd[1]: libpod-9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7.scope: Deactivated successfully.
Dec 13 07:14:16 compute-0 conmon[86730]: conmon 9dd977d021968012ed2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7.scope/container/memory.events
Dec 13 07:14:16 compute-0 podman[86717]: 2025-12-13 07:14:16.237112807 +0000 UTC m=+0.243591970 container died 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8ac7d6d9d6e66d71397d5d817b98aa2554c009181d79754d5602a994873367a-merged.mount: Deactivated successfully.
Dec 13 07:14:16 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3611488797; not ready for session (expect reconnect)
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:16 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 07:14:16 compute-0 podman[86717]: 2025-12-13 07:14:16.259372668 +0000 UTC m=+0.265851831 container remove 9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:16 compute-0 systemd[1]: libpod-conmon-9dd977d021968012ed2f01d49383e21218642515724f0dd9c7d07cd716a29eb7.scope: Deactivated successfully.
Dec 13 07:14:16 compute-0 ceph-mon[74928]: purged_snaps scrub starts
Dec 13 07:14:16 compute-0 ceph-mon[74928]: purged_snaps scrub ok
Dec 13 07:14:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 13 07:14:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:16 compute-0 ceph-mon[74928]: Deploying daemon osd.2 on compute-0
Dec 13 07:14:16 compute-0 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 13 07:14:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:14:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Dec 13 07:14:16 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797] boot
Dec 13 07:14:16 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 07:14:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 07:14:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:16 compute-0 ceph-osd[85140]: osd.0 9 state: booting -> active
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:16 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:16 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:16 compute-0 systemd[1]: Reloading.
Dec 13 07:14:16 compute-0 systemd-sysv-generator[86787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:16 compute-0 systemd-rc-local-generator[86784]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:16 compute-0 systemd[1]: Reloading.
Dec 13 07:14:16 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 07:14:16 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 07:14:16 compute-0 systemd-rc-local-generator[86822]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:16 compute-0 systemd-sysv-generator[86825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:16 compute-0 systemd[1]: Starting Ceph osd.2 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:14:16 compute-0 podman[86879]: 2025-12-13 07:14:16.988482273 +0000 UTC m=+0.024662278 container create bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:17 compute-0 podman[86879]: 2025-12-13 07:14:17.042869254 +0000 UTC m=+0.079049259 container init bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:17 compute-0 ceph-mgr[75200]: [devicehealth INFO root] creating mgr pool
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 13 07:14:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 13 07:14:17 compute-0 podman[86879]: 2025-12-13 07:14:17.048419598 +0000 UTC m=+0.084599604 container start bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:17 compute-0 podman[86879]: 2025-12-13 07:14:17.051031268 +0000 UTC m=+0.087211294 container attach bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:17 compute-0 podman[86879]: 2025-12-13 07:14:16.978145779 +0000 UTC m=+0.014325794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 07:14:17 compute-0 ceph-mon[74928]: OSD bench result of 21400.840372 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 07:14:17 compute-0 ceph-mon[74928]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 07:14:17 compute-0 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 13 07:14:17 compute-0 ceph-mon[74928]: osd.0 [v2:192.168.122.100:6802/3611488797,v1:192.168.122.100:6803/3611488797] boot
Dec 13 07:14:17 compute-0 ceph-mon[74928]: osdmap e9: 3 total, 1 up, 3 in
Dec 13 07:14:17 compute-0 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 07:14:17 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 07:14:17 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:17 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:17 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 13 07:14:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 07:14:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 13 07:14:17 compute-0 ceph-osd[86142]: osd.1 0 done with init, starting boot process
Dec 13 07:14:17 compute-0 ceph-osd[86142]: osd.1 0 start_boot
Dec 13 07:14:17 compute-0 ceph-osd[86142]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 07:14:17 compute-0 ceph-osd[86142]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 07:14:17 compute-0 ceph-osd[86142]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 07:14:17 compute-0 ceph-osd[86142]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 07:14:17 compute-0 ceph-osd[86142]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec 13 07:14:17 compute-0 ceph-osd[85140]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 07:14:17 compute-0 ceph-osd[85140]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 13 07:14:17 compute-0 ceph-osd[85140]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 07:14:17 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec 13 07:14:17 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2712458861; not ready for session (expect reconnect)
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:17 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:17 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:17 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:17 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 13 07:14:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 13 07:14:17 compute-0 lvm[86974]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:17 compute-0 lvm[86974]: VG ceph_vg0 finished
Dec 13 07:14:17 compute-0 lvm[86977]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:14:17 compute-0 lvm[86977]: VG ceph_vg1 finished
Dec 13 07:14:17 compute-0 lvm[86980]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:17 compute-0 lvm[86980]: VG ceph_vg2 finished
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:17 compute-0 bash[86879]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 07:14:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 07:14:17 compute-0 bash[86879]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 07:14:17 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate[86891]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 07:14:17 compute-0 bash[86879]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 07:14:17 compute-0 systemd[1]: libpod-bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d.scope: Deactivated successfully.
Dec 13 07:14:17 compute-0 podman[86879]: 2025-12-13 07:14:17.814050527 +0000 UTC m=+0.850230531 container died bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:14:17 compute-0 systemd[1]: libpod-bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d.scope: Consumed 1.018s CPU time.
Dec 13 07:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-cacee7e2a09edec88a34da32d488a986046fcf55bf48ce4be8ad173be13aa664-merged.mount: Deactivated successfully.
Dec 13 07:14:17 compute-0 podman[86879]: 2025-12-13 07:14:17.86119458 +0000 UTC m=+0.897374586 container remove bfa99686f93a1d5bb0e489b51cd2e34c3ce32773640f4988bd149a0bd41d4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:18 compute-0 podman[87139]: 2025-12-13 07:14:18.03140086 +0000 UTC m=+0.061555971 container create bb7cd2f636f6ef6017e815fa4141bf45b494cfb9652486980f5606492505725a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f187ab4b9e239a28c546dd35fd1006ef1c99f0252f30548ee49ab2fe96259030/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 podman[87139]: 2025-12-13 07:14:17.984424351 +0000 UTC m=+0.014579472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:18 compute-0 podman[87139]: 2025-12-13 07:14:18.137793065 +0000 UTC m=+0.167948165 container init bb7cd2f636f6ef6017e815fa4141bf45b494cfb9652486980f5606492505725a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:14:18 compute-0 podman[87139]: 2025-12-13 07:14:18.142484624 +0000 UTC m=+0.172639725 container start bb7cd2f636f6ef6017e815fa4141bf45b494cfb9652486980f5606492505725a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:18 compute-0 bash[87139]: bb7cd2f636f6ef6017e815fa4141bf45b494cfb9652486980f5606492505725a
Dec 13 07:14:18 compute-0 systemd[1]: Started Ceph osd.2 for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:14:18 compute-0 ceph-osd[87155]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: pidfile_write: ignore empty --pid-file
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 sudo[86187]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v21: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3dc7800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 sudo[87177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:18 compute-0 sudo[87177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:18 compute-0 sudo[87177]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:18 compute-0 ceph-osd[87155]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 13 07:14:18 compute-0 ceph-osd[87155]: load: jerasure load: lrc 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-osd[87155]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 07:14:18 compute-0 ceph-osd[87155]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 sudo[87207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 sudo[87207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 13 07:14:18 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2712458861; not ready for session (expect reconnect)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:18 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 13 07:14:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 13 07:14:18 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 13 07:14:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:18 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:18 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:18 compute-0 ceph-mon[74928]: from='osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 07:14:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 13 07:14:18 compute-0 ceph-mon[74928]: osdmap e10: 3 total, 1 up, 3 in
Dec 13 07:14:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 13 07:14:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount shared_bdev_used = 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: RocksDB version: 7.9.2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Git sha 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: DB SUMMARY
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: DB Session ID:  DG2TTK5W7R98U96GYMKG
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: CURRENT file:  CURRENT
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                         Options.error_if_exists: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.create_if_missing: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                                     Options.env: 0x5558b3e37ea0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                                Options.info_log: 0x5558b4ec28a0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                              Options.statistics: (nil)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.use_fsync: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                              Options.db_log_dir: 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.write_buffer_manager: 0x5558b3e98b40
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.unordered_write: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.row_cache: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                              Options.wal_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.two_write_queues: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.wal_compression: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.atomic_flush: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.max_background_jobs: 4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.max_background_compactions: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.max_subcompactions: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.max_open_files: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Compression algorithms supported:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kZSTD supported: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kXpressCompression supported: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kBZip2Compression supported: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kLZ4Compression supported: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kZlibCompression supported: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kSnappyCompression supported: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 97da3cf1-8819-480c-976a-60b9e4004bb7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058392205, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058393627, "job": 1, "event": "recovery_finished"}
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: freelist init
Dec 13 07:14:18 compute-0 ceph-osd[87155]: freelist _read_cfg
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs umount
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bdev(0x5558b3e8f000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluefs mount shared_bdev_used = 27262976
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: RocksDB version: 7.9.2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Git sha 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: DB SUMMARY
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: DB Session ID:  DG2TTK5W7R98U96GYMKH
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: CURRENT file:  CURRENT
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                         Options.error_if_exists: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.create_if_missing: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                                     Options.env: 0x5558b4f627e0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                                Options.info_log: 0x5558b4ec2a40
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                              Options.statistics: (nil)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.use_fsync: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                              Options.db_log_dir: 
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.write_buffer_manager: 0x5558b3e99900
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.unordered_write: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.row_cache: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                              Options.wal_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.two_write_queues: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.wal_compression: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.atomic_flush: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.max_background_jobs: 4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.max_background_compactions: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.max_subcompactions: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.max_open_files: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Compression algorithms supported:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kZSTD supported: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kXpressCompression supported: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kBZip2Compression supported: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kLZ4Compression supported: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kZlibCompression supported: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         kSnappyCompression supported: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec30c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec30c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:           Options.merge_operator: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5558b4ec30c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5558b3e3ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.compression: LZ4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.num_levels: 7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.bloom_locality: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                               Options.ttl: 2592000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                       Options.enable_blob_files: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                           Options.min_blob_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 97da3cf1-8819-480c-976a-60b9e4004bb7
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058472256, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058475804, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610058, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "97da3cf1-8819-480c-976a-60b9e4004bb7", "db_session_id": "DG2TTK5W7R98U96GYMKH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058480216, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610058, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "97da3cf1-8819-480c-976a-60b9e4004bb7", "db_session_id": "DG2TTK5W7R98U96GYMKH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058482480, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610058, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "97da3cf1-8819-480c-976a-60b9e4004bb7", "db_session_id": "DG2TTK5W7R98U96GYMKH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610058483145, "job": 1, "event": "recovery_finished"}
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5558b50a6000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: DB pointer 0x5558b507e000
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 13 07:14:18 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec 13 07:14:18 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 07:14:18 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 07:14:18 compute-0 ceph-osd[87155]: _get_class not permitted to load lua
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:14:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 3 writes, 4 keys, 3 commit groups, 1.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 2 writes, 0 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3 writes, 4 keys, 3 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 2 writes, 0 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:14:18 compute-0 ceph-osd[87155]: _get_class not permitted to load sdk
Dec 13 07:14:18 compute-0 ceph-osd[87155]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 07:14:18 compute-0 ceph-osd[87155]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 07:14:18 compute-0 ceph-osd[87155]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 07:14:18 compute-0 ceph-osd[87155]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 07:14:18 compute-0 ceph-osd[87155]: osd.2 0 load_pgs
Dec 13 07:14:18 compute-0 ceph-osd[87155]: osd.2 0 load_pgs opened 0 pgs
Dec 13 07:14:18 compute-0 ceph-osd[87155]: osd.2 0 log_to_monitors true
Dec 13 07:14:18 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2[87151]: 2025-12-13T07:14:18.495+0000 7f6c509578c0 -1 osd.2 0 log_to_monitors true
Dec 13 07:14:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 13 07:14:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 13 07:14:18 compute-0 podman[87668]: 2025-12-13 07:14:18.567421955 +0000 UTC m=+0.026528256 container create 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:18 compute-0 systemd[1]: Started libpod-conmon-6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8.scope.
Dec 13 07:14:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:18 compute-0 podman[87668]: 2025-12-13 07:14:18.613921819 +0000 UTC m=+0.073028140 container init 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:18 compute-0 podman[87668]: 2025-12-13 07:14:18.61832175 +0000 UTC m=+0.077428052 container start 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 07:14:18 compute-0 podman[87668]: 2025-12-13 07:14:18.619277608 +0000 UTC m=+0.078383908 container attach 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 07:14:18 compute-0 suspicious_goldwasser[87680]: 167 167
Dec 13 07:14:18 compute-0 systemd[1]: libpod-6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8.scope: Deactivated successfully.
Dec 13 07:14:18 compute-0 podman[87668]: 2025-12-13 07:14:18.622085817 +0000 UTC m=+0.081192118 container died 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 07:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-674f3bc22c8b797aa085e3db9b9c775aff8f2d67d76947734cbc0b52319a4007-merged.mount: Deactivated successfully.
Dec 13 07:14:18 compute-0 podman[87668]: 2025-12-13 07:14:18.644103082 +0000 UTC m=+0.103209382 container remove 6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 07:14:18 compute-0 podman[87668]: 2025-12-13 07:14:18.557350149 +0000 UTC m=+0.016456470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:18 compute-0 systemd[1]: libpod-conmon-6b19f4150266ed25e89d4aca3321af0504dcacf9c9abcb0add7046c6464268d8.scope: Deactivated successfully.
Dec 13 07:14:18 compute-0 podman[87703]: 2025-12-13 07:14:18.759134869 +0000 UTC m=+0.028412038 container create 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:18 compute-0 systemd[1]: Started libpod-conmon-104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c.scope.
Dec 13 07:14:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:18 compute-0 podman[87703]: 2025-12-13 07:14:18.812957329 +0000 UTC m=+0.082234498 container init 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:18 compute-0 podman[87703]: 2025-12-13 07:14:18.817493266 +0000 UTC m=+0.086770435 container start 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:18 compute-0 podman[87703]: 2025-12-13 07:14:18.818521499 +0000 UTC m=+0.087798669 container attach 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:18 compute-0 podman[87703]: 2025-12-13 07:14:18.746822982 +0000 UTC m=+0.016100161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 102.095 iops: 26136.199 elapsed_sec: 0.115
Dec 13 07:14:19 compute-0 ceph-osd[86142]: log_channel(cluster) log [WRN] : OSD bench result of 26136.199394 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 0 waiting for initial osdmap
Dec 13 07:14:19 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1[86138]: 2025-12-13T07:14:19.057+0000 7fe8bc6e7640 -1 osd.1 0 waiting for initial osdmap
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 11 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 11 set_numa_affinity not setting numa affinity
Dec 13 07:14:19 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-1[86138]: 2025-12-13T07:14:19.068+0000 7fe8b74ec640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec 13 07:14:19 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2712458861; not ready for session (expect reconnect)
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:19 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 13 07:14:19 compute-0 ceph-mon[74928]: purged_snaps scrub starts
Dec 13 07:14:19 compute-0 ceph-mon[74928]: purged_snaps scrub ok
Dec 13 07:14:19 compute-0 ceph-mon[74928]: pgmap v21: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 07:14:19 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:19 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 13 07:14:19 compute-0 ceph-mon[74928]: osdmap e11: 3 total, 1 up, 3 in
Dec 13 07:14:19 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:19 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:19 compute-0 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 13 07:14:19 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:19 compute-0 lvm[87791]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:19 compute-0 lvm[87791]: VG ceph_vg0 finished
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861] boot
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:19 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 12 state: booting -> active
Dec 13 07:14:19 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:19 compute-0 lvm[87794]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:14:19 compute-0 lvm[87794]: VG ceph_vg1 finished
Dec 13 07:14:19 compute-0 lvm[87797]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:19 compute-0 lvm[87797]: VG ceph_vg2 finished
Dec 13 07:14:19 compute-0 lvm[87798]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:19 compute-0 lvm[87798]: VG ceph_vg0 finished
Dec 13 07:14:19 compute-0 friendly_khorana[87717]: {}
Dec 13 07:14:19 compute-0 lvm[87801]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:19 compute-0 lvm[87801]: VG ceph_vg2 finished
Dec 13 07:14:19 compute-0 systemd[1]: libpod-104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c.scope: Deactivated successfully.
Dec 13 07:14:19 compute-0 podman[87703]: 2025-12-13 07:14:19.421842934 +0000 UTC m=+0.691120103 container died 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-268cd73b7ebc14ea92a5f7cbb44b87192cc16ae983dce3338d9be445779cd73e-merged.mount: Deactivated successfully.
Dec 13 07:14:19 compute-0 podman[87703]: 2025-12-13 07:14:19.44739825 +0000 UTC m=+0.716675419 container remove 104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:19 compute-0 lvm[87809]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:19 compute-0 lvm[87809]: VG ceph_vg2 finished
Dec 13 07:14:19 compute-0 systemd[1]: libpod-conmon-104e799054e77dc9fda0f6cdd84955a1898aa5d6542273869a23b5473570145c.scope: Deactivated successfully.
Dec 13 07:14:19 compute-0 sudo[87207]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:19 compute-0 sudo[87813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:14:19 compute-0 sudo[87813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:19 compute-0 sudo[87813]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:19 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 07:14:19 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 07:14:19 compute-0 sudo[87838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:19 compute-0 sudo[87838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:19 compute-0 sudo[87838]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:19 compute-0 sudo[87863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:14:19 compute-0 sudo[87863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:19 compute-0 podman[87924]: 2025-12-13 07:14:19.937579734 +0000 UTC m=+0.040421156 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:20 compute-0 podman[87924]: 2025-12-13 07:14:20.015120706 +0000 UTC m=+0.117962138 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 07:14:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v24: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 07:14:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 13 07:14:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 07:14:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec 13 07:14:20 compute-0 ceph-osd[87155]: osd.2 0 done with init, starting boot process
Dec 13 07:14:20 compute-0 ceph-osd[87155]: osd.2 0 start_boot
Dec 13 07:14:20 compute-0 ceph-osd[87155]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 07:14:20 compute-0 ceph-osd[87155]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 07:14:20 compute-0 ceph-osd[87155]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 07:14:20 compute-0 ceph-osd[87155]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 07:14:20 compute-0 ceph-osd[87155]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 13 07:14:20 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec 13 07:14:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:20 compute-0 ceph-mon[74928]: OSD bench result of 26136.199394 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 07:14:20 compute-0 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 13 07:14:20 compute-0 ceph-mon[74928]: osd.1 [v2:192.168.122.100:6806/2712458861,v1:192.168.122.100:6807/2712458861] boot
Dec 13 07:14:20 compute-0 ceph-mon[74928]: osdmap e12: 3 total, 2 up, 3 in
Dec 13 07:14:20 compute-0 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 07:14:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 07:14:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:20 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:20 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:20 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2380119328; not ready for session (expect reconnect)
Dec 13 07:14:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:20 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:20 compute-0 sudo[87863]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:20 compute-0 sudo[88047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:20 compute-0 sudo[88047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:20 compute-0 sudo[88047]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:20 compute-0 sudo[88072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- inventory --format=json-pretty --filter-for-batch
Dec 13 07:14:20 compute-0 sudo[88072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:20 compute-0 podman[88107]: 2025-12-13 07:14:20.78627064 +0000 UTC m=+0.032005463 container create a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:14:20 compute-0 systemd[1]: Started libpod-conmon-a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae.scope.
Dec 13 07:14:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:20 compute-0 podman[88107]: 2025-12-13 07:14:20.845761236 +0000 UTC m=+0.091496059 container init a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:20 compute-0 podman[88107]: 2025-12-13 07:14:20.850642284 +0000 UTC m=+0.096377106 container start a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:20 compute-0 podman[88107]: 2025-12-13 07:14:20.851894346 +0000 UTC m=+0.097629169 container attach a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:20 compute-0 clever_mclean[88120]: 167 167
Dec 13 07:14:20 compute-0 systemd[1]: libpod-a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae.scope: Deactivated successfully.
Dec 13 07:14:20 compute-0 podman[88107]: 2025-12-13 07:14:20.854178401 +0000 UTC m=+0.099913224 container died a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-115ad9d993227b1bdcfb6134d52231060e8a16bb6bebebdd0ef03e3f6f6b1b87-merged.mount: Deactivated successfully.
Dec 13 07:14:20 compute-0 podman[88107]: 2025-12-13 07:14:20.771869735 +0000 UTC m=+0.017604579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:20 compute-0 podman[88107]: 2025-12-13 07:14:20.87260803 +0000 UTC m=+0.118342853 container remove a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:20 compute-0 systemd[1]: libpod-conmon-a6848fc4b6387853596bddbb04a0e395a3e3330d54fe61cd273ecfb7718aceae.scope: Deactivated successfully.
Dec 13 07:14:20 compute-0 podman[88142]: 2025-12-13 07:14:20.987087059 +0000 UTC m=+0.030340243 container create 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 07:14:21 compute-0 systemd[1]: Started libpod-conmon-4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0.scope.
Dec 13 07:14:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:21 compute-0 podman[88142]: 2025-12-13 07:14:21.037483518 +0000 UTC m=+0.080736712 container init 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 07:14:21 compute-0 podman[88142]: 2025-12-13 07:14:21.042832654 +0000 UTC m=+0.086085837 container start 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:21 compute-0 podman[88142]: 2025-12-13 07:14:21.044090058 +0000 UTC m=+0.087343262 container attach 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 07:14:21 compute-0 podman[88142]: 2025-12-13 07:14:20.973174872 +0000 UTC m=+0.016428076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 13 07:14:21 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2380119328; not ready for session (expect reconnect)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:21 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: pgmap v24: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 07:14:21 compute-0 ceph-mon[74928]: from='osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
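The `weight: 0.0195` in the `osd crush create-or-move` command above is the device's CRUSH weight, which by convention is its capacity in TiB. Converting it back gives roughly the 20 GiB per device the surrounding pgmap lines report:

```python
# CRUSH weight from the "osd crush create-or-move" command above.
weight_tib = 0.0195

# CRUSH weights are conventionally capacity in TiB; convert to GiB.
size_gib = weight_tib * 1024
print(round(size_gib))  # ~20 GiB, matching the pgmap's "20 GiB" totals
```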
Dec 13 07:14:21 compute-0 ceph-mon[74928]: osdmap e13: 3 total, 2 up, 3 in
Dec 13 07:14:21 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:21 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:21 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:21 compute-0 trusting_diffie[88155]: [
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:     {
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "available": false,
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "being_replaced": false,
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "ceph_device_lvm": false,
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "lsm_data": {},
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "lvs": [],
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "path": "/dev/sr0",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "rejected_reasons": [
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "Insufficient space (<5GB)",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "Has a FileSystem"
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         ],
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         "sys_api": {
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "actuators": null,
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "device_nodes": [
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:                 "sr0"
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             ],
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "devname": "sr0",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "human_readable_size": "474.00 KB",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "id_bus": "ata",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "model": "QEMU DVD-ROM",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "nr_requests": "64",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "parent": "/dev/sr0",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "partitions": {},
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "path": "/dev/sr0",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "removable": "1",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "rev": "2.5+",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "ro": "0",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "rotational": "1",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "sas_address": "",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "sas_device_handle": "",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "scheduler_mode": "mq-deadline",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "sectors": 0,
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "sectorsize": "2048",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "size": 485376.0,
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "support_discard": "2048",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "type": "disk",
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:             "vendor": "QEMU"
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:         }
Dec 13 07:14:21 compute-0 trusting_diffie[88155]:     }
Dec 13 07:14:21 compute-0 trusting_diffie[88155]: ]
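The container output above is `ceph-volume inventory --format=json-pretty` run with `--filter-for-batch`: here the only device reported is the QEMU DVD-ROM, rejected for size and an existing filesystem. A minimal sketch of how such output can be consumed, using an abbreviated copy of the logged blob (only the fields the filter needs):

```python
import json

# Abbreviated inventory entry, values copied from the log above.
inventory_json = """
[
  {
    "available": false,
    "path": "/dev/sr0",
    "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]
  }
]
"""

devices = json.loads(inventory_json)

# Split devices into usable candidates and rejects with their reasons,
# the same distinction cephadm draws when planning an OSD batch.
usable = [d["path"] for d in devices if d["available"]]
rejected = {d["path"]: d["rejected_reasons"]
            for d in devices if not d["available"]}

print(usable)    # [] -- no usable raw devices on this host
print(rejected)  # {'/dev/sr0': [...]}
```

This explains why the subsequent `lvm batch` call (further down) is given pre-made LVs (`/dev/ceph_vg0/ceph_lv0`, ...) rather than raw disks.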
Dec 13 07:14:21 compute-0 systemd[1]: libpod-4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0.scope: Deactivated successfully.
Dec 13 07:14:21 compute-0 podman[88142]: 2025-12-13 07:14:21.439200607 +0000 UTC m=+0.482453801 container died 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-23a2331dbecacd27d98ec61310c1b6bd002afd858b5010e7fd1335031bae80cc-merged.mount: Deactivated successfully.
Dec 13 07:14:21 compute-0 podman[88142]: 2025-12-13 07:14:21.460364545 +0000 UTC m=+0.503617729 container remove 4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 07:14:21 compute-0 systemd[1]: libpod-conmon-4e6f40cc9afcebe8c713055ec15dc02d1be8339b30d3acadfcd040a4f86b5fe0.scope: Deactivated successfully.
Dec 13 07:14:21 compute-0 sudo[88072]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 13 07:14:21 compute-0 ceph-mgr[75200]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43932k
Dec 13 07:14:21 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43932k
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 13 07:14:21 compute-0 ceph-mgr[75200]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44986777: error parsing value: Value '44986777' is below minimum 939524096
Dec 13 07:14:21 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44986777: error parsing value: Value '44986777' is below minimum 939524096
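The two warnings above show cephadm's memory autotuner at work: it divides the memory it believes is available among the host's OSDs, but the monitor refuses any `osd_memory_target` below a hard floor (939524096 bytes, i.e. 896 MiB), so on this small VM the adjustment fails. A minimal sketch of that interaction, with the constants taken from the log and the function name purely illustrative:

```python
# Minimum osd_memory_target, from the "below minimum 939524096" error.
OSD_MEMORY_TARGET_MIN = 939524096  # bytes (896 MiB)

def autotune_target(available_bytes, num_osds):
    """Split available memory evenly across OSDs, rejecting targets
    below Ceph's configured minimum, as the mgr warning above shows."""
    per_osd = available_bytes // num_osds
    if per_osd < OSD_MEMORY_TARGET_MIN:
        raise ValueError(
            f"Value '{per_osd}' is below minimum {OSD_MEMORY_TARGET_MIN}")
    return per_osd
```

With three OSDs and only ~135 MB to hand out, the computed per-OSD target (44986777 bytes in the log) is far under the floor, so the existing value is left in place.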
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:21 compute-0 sudo[88817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:21 compute-0 sudo[88817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:21 compute-0 sudo[88817]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:21 compute-0 sudo[88842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:14:21 compute-0 sudo[88842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:21 compute-0 podman[88876]: 2025-12-13 07:14:21.896728407 +0000 UTC m=+0.031330757 container create 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:21 compute-0 systemd[1]: Started libpod-conmon-3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426.scope.
Dec 13 07:14:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:21 compute-0 podman[88876]: 2025-12-13 07:14:21.953396336 +0000 UTC m=+0.087998686 container init 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:21 compute-0 podman[88876]: 2025-12-13 07:14:21.958133148 +0000 UTC m=+0.092735498 container start 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:21 compute-0 podman[88876]: 2025-12-13 07:14:21.959170939 +0000 UTC m=+0.093773289 container attach 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:21 compute-0 pedantic_turing[88889]: 167 167
Dec 13 07:14:21 compute-0 systemd[1]: libpod-3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426.scope: Deactivated successfully.
Dec 13 07:14:21 compute-0 podman[88876]: 2025-12-13 07:14:21.961769302 +0000 UTC m=+0.096371652 container died 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9159bef3878e74b6dc18d55add062e229faa0b69c9d9d0525d020f868f4223f1-merged.mount: Deactivated successfully.
Dec 13 07:14:21 compute-0 podman[88876]: 2025-12-13 07:14:21.88365166 +0000 UTC m=+0.018254030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:21 compute-0 podman[88876]: 2025-12-13 07:14:21.985741178 +0000 UTC m=+0.120343528 container remove 3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:22 compute-0 systemd[1]: libpod-conmon-3448dae9ceadd1e3bf75f02ce3f97ffb39c4f5b0e234e43c9a78a41e447ff426.scope: Deactivated successfully.
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 103.165 iops: 26410.155 elapsed_sec: 0.114
Dec 13 07:14:22 compute-0 ceph-osd[87155]: log_channel(cluster) log [WRN] : OSD bench result of 26410.155043 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 0 waiting for initial osdmap
Dec 13 07:14:22 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2[87151]: 2025-12-13T07:14:22.092+0000 7f6c4c8d9640 -1 osd.2 0 waiting for initial osdmap
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 14 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 14 set_numa_affinity not setting numa affinity
Dec 13 07:14:22 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-osd-2[87151]: 2025-12-13T07:14:22.107+0000 7f6c476de640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec 13 07:14:22 compute-0 podman[88911]: 2025-12-13 07:14:22.121050534 +0000 UTC m=+0.033864060 container create 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 07:14:22 compute-0 systemd[1]: Started libpod-conmon-8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8.scope.
Dec 13 07:14:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:22 compute-0 podman[88911]: 2025-12-13 07:14:22.181281278 +0000 UTC m=+0.094094815 container init 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:22 compute-0 podman[88911]: 2025-12-13 07:14:22.187067774 +0000 UTC m=+0.099881301 container start 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:22 compute-0 podman[88911]: 2025-12-13 07:14:22.188072212 +0000 UTC m=+0.100885738 container attach 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 13 07:14:22 compute-0 podman[88911]: 2025-12-13 07:14:22.108873266 +0000 UTC m=+0.021686792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:22 compute-0 ceph-mgr[75200]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2380119328; not ready for session (expect reconnect)
Dec 13 07:14:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:22 compute-0 ceph-mgr[75200]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 07:14:22 compute-0 ceph-mon[74928]: purged_snaps scrub starts
Dec 13 07:14:22 compute-0 ceph-mon[74928]: purged_snaps scrub ok
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: osdmap e14: 3 total, 2 up, 3 in
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: Adjusting osd_memory_target on compute-0 to 43932k
Dec 13 07:14:22 compute-0 ceph-mon[74928]: Unable to set osd_memory_target on compute-0 to 44986777: error parsing value: Value '44986777' is below minimum 939524096
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:22 compute-0 quirky_chandrasekhar[88925]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:14:22 compute-0 quirky_chandrasekhar[88925]: --> All data devices are unavailable
Dec 13 07:14:22 compute-0 systemd[1]: libpod-8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8.scope: Deactivated successfully.
Dec 13 07:14:22 compute-0 podman[88911]: 2025-12-13 07:14:22.558572211 +0000 UTC m=+0.471385737 container died 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 13 07:14:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Dec 13 07:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c33e87a03dfce41c163476a17775478e3803d0f65a764c9dbd33138215c4cc36-merged.mount: Deactivated successfully.
Dec 13 07:14:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328] boot
Dec 13 07:14:22 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Dec 13 07:14:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 07:14:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:22 compute-0 ceph-osd[87155]: osd.2 15 state: booting -> active
Dec 13 07:14:22 compute-0 podman[88911]: 2025-12-13 07:14:22.582637763 +0000 UTC m=+0.495451289 container remove 8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chandrasekhar, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:22 compute-0 systemd[1]: libpod-conmon-8cdad2bf0d0133433f1494bb20e6160dfcfb2ff8369a01c00998926ffb8aa3d8.scope: Deactivated successfully.
Dec 13 07:14:22 compute-0 sudo[88842]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:22 compute-0 sudo[88956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:22 compute-0 sudo[88956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:22 compute-0 sudo[88956]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:22 compute-0 sudo[88981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:14:22 compute-0 sudo[88981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:22 compute-0 podman[89016]: 2025-12-13 07:14:22.925591659 +0000 UTC m=+0.028777278 container create 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:14:22 compute-0 systemd[1]: Started libpod-conmon-2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41.scope.
Dec 13 07:14:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:22 compute-0 podman[89016]: 2025-12-13 07:14:22.97408905 +0000 UTC m=+0.077274680 container init 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:14:22 compute-0 podman[89016]: 2025-12-13 07:14:22.978958693 +0000 UTC m=+0.082144302 container start 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:22 compute-0 podman[89016]: 2025-12-13 07:14:22.979945157 +0000 UTC m=+0.083130766 container attach 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:22 compute-0 elastic_banach[89029]: 167 167
Dec 13 07:14:22 compute-0 systemd[1]: libpod-2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41.scope: Deactivated successfully.
Dec 13 07:14:22 compute-0 conmon[89029]: conmon 2289be039e66d01ad22d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41.scope/container/memory.events
Dec 13 07:14:22 compute-0 podman[89016]: 2025-12-13 07:14:22.983046196 +0000 UTC m=+0.086231805 container died 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 07:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e05c93f0dd4e8c97f4f862216e32f95434ebb8ab9c1073283838b4de6b6d320-merged.mount: Deactivated successfully.
Dec 13 07:14:23 compute-0 podman[89016]: 2025-12-13 07:14:23.010429433 +0000 UTC m=+0.113615042 container remove 2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:23 compute-0 podman[89016]: 2025-12-13 07:14:22.913992759 +0000 UTC m=+0.017178388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:23 compute-0 systemd[1]: libpod-conmon-2289be039e66d01ad22d93a9946ac1298f4cfec2d97b51910274d67b22398b41.scope: Deactivated successfully.
Dec 13 07:14:23 compute-0 podman[89050]: 2025-12-13 07:14:23.125321485 +0000 UTC m=+0.029495248 container create a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 07:14:23 compute-0 systemd[1]: Started libpod-conmon-a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979.scope.
Dec 13 07:14:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:23 compute-0 podman[89050]: 2025-12-13 07:14:23.186892277 +0000 UTC m=+0.091066040 container init a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:14:23 compute-0 podman[89050]: 2025-12-13 07:14:23.192285794 +0000 UTC m=+0.096459557 container start a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 07:14:23 compute-0 podman[89050]: 2025-12-13 07:14:23.193322012 +0000 UTC m=+0.097495775 container attach a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:14:23 compute-0 ceph-mgr[75200]: [devicehealth INFO root] creating main.db for devicehealth
Dec 13 07:14:23 compute-0 podman[89050]: 2025-12-13 07:14:23.112658975 +0000 UTC m=+0.016832758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:23 compute-0 ceph-mgr[75200]: [devicehealth INFO root] Check health
Dec 13 07:14:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 13 07:14:23 compute-0 sudo[89080]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 13 07:14:23 compute-0 sudo[89080]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 07:14:23 compute-0 sudo[89080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 13 07:14:23 compute-0 sudo[89080]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 13 07:14:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 07:14:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 07:14:23 compute-0 ceph-mon[74928]: OSD bench result of 26410.155043 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 07:14:23 compute-0 ceph-mon[74928]: pgmap v27: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 13 07:14:23 compute-0 ceph-mon[74928]: osd.2 [v2:192.168.122.100:6810/2380119328,v1:192.168.122.100:6811/2380119328] boot
Dec 13 07:14:23 compute-0 ceph-mon[74928]: osdmap e15: 3 total, 3 up, 3 in
Dec 13 07:14:23 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 07:14:23 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 13 07:14:23 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 13 07:14:23 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 07:14:23 compute-0 nervous_black[89063]: {
Dec 13 07:14:23 compute-0 nervous_black[89063]:     "0": [
Dec 13 07:14:23 compute-0 nervous_black[89063]:         {
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "devices": [
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "/dev/loop3"
Dec 13 07:14:23 compute-0 nervous_black[89063]:             ],
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_name": "ceph_lv0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_size": "21470642176",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "name": "ceph_lv0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "tags": {
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.crush_device_class": "",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.encrypted": "0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osd_id": "0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.type": "block",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.vdo": "0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.with_tpm": "0"
Dec 13 07:14:23 compute-0 nervous_black[89063]:             },
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "type": "block",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "vg_name": "ceph_vg0"
Dec 13 07:14:23 compute-0 nervous_black[89063]:         }
Dec 13 07:14:23 compute-0 nervous_black[89063]:     ],
Dec 13 07:14:23 compute-0 nervous_black[89063]:     "1": [
Dec 13 07:14:23 compute-0 nervous_black[89063]:         {
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "devices": [
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "/dev/loop4"
Dec 13 07:14:23 compute-0 nervous_black[89063]:             ],
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_name": "ceph_lv1",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_size": "21470642176",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "name": "ceph_lv1",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "tags": {
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.crush_device_class": "",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.encrypted": "0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osd_id": "1",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.type": "block",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.vdo": "0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.with_tpm": "0"
Dec 13 07:14:23 compute-0 nervous_black[89063]:             },
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "type": "block",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "vg_name": "ceph_vg1"
Dec 13 07:14:23 compute-0 nervous_black[89063]:         }
Dec 13 07:14:23 compute-0 nervous_black[89063]:     ],
Dec 13 07:14:23 compute-0 nervous_black[89063]:     "2": [
Dec 13 07:14:23 compute-0 nervous_black[89063]:         {
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "devices": [
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "/dev/loop5"
Dec 13 07:14:23 compute-0 nervous_black[89063]:             ],
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_name": "ceph_lv2",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_size": "21470642176",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "name": "ceph_lv2",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "tags": {
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.crush_device_class": "",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.encrypted": "0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osd_id": "2",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.type": "block",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.vdo": "0",
Dec 13 07:14:23 compute-0 nervous_black[89063]:                 "ceph.with_tpm": "0"
Dec 13 07:14:23 compute-0 nervous_black[89063]:             },
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "type": "block",
Dec 13 07:14:23 compute-0 nervous_black[89063]:             "vg_name": "ceph_vg2"
Dec 13 07:14:23 compute-0 nervous_black[89063]:         }
Dec 13 07:14:23 compute-0 nervous_black[89063]:     ]
Dec 13 07:14:23 compute-0 nervous_black[89063]: }
Dec 13 07:14:23 compute-0 systemd[1]: libpod-a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979.scope: Deactivated successfully.
Dec 13 07:14:23 compute-0 podman[89087]: 2025-12-13 07:14:23.467776433 +0000 UTC m=+0.017718704 container died a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1850988413cdb4e1dca8088a95e26d961e7d737719ec20915d39c82c2260b099-merged.mount: Deactivated successfully.
Dec 13 07:14:23 compute-0 podman[89087]: 2025-12-13 07:14:23.488046881 +0000 UTC m=+0.037989132 container remove a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:23 compute-0 systemd[1]: libpod-conmon-a656fe7e43aa080dc5035a558c4a4f491c4049a9d2b38fd7786d0fd16a3a4979.scope: Deactivated successfully.
Dec 13 07:14:23 compute-0 sudo[88981]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:23 compute-0 sudo[89099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:23 compute-0 sudo[89099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:23 compute-0 sudo[89099]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:23 compute-0 sudo[89124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:14:23 compute-0 sudo[89124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:23 compute-0 podman[89159]: 2025-12-13 07:14:23.831605655 +0000 UTC m=+0.028791975 container create e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:14:23 compute-0 systemd[1]: Started libpod-conmon-e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57.scope.
Dec 13 07:14:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:23 compute-0 podman[89159]: 2025-12-13 07:14:23.89052272 +0000 UTC m=+0.087709061 container init e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 07:14:23 compute-0 podman[89159]: 2025-12-13 07:14:23.896012838 +0000 UTC m=+0.093199160 container start e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:23 compute-0 podman[89159]: 2025-12-13 07:14:23.897378215 +0000 UTC m=+0.094564556 container attach e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:14:23 compute-0 relaxed_montalcini[89172]: 167 167
Dec 13 07:14:23 compute-0 systemd[1]: libpod-e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57.scope: Deactivated successfully.
Dec 13 07:14:23 compute-0 podman[89159]: 2025-12-13 07:14:23.900112424 +0000 UTC m=+0.097298745 container died e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 07:14:23 compute-0 podman[89159]: 2025-12-13 07:14:23.819597385 +0000 UTC m=+0.016783726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:23 compute-0 podman[89159]: 2025-12-13 07:14:23.91861017 +0000 UTC m=+0.115796492 container remove e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 07:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a200cc219a7b0b34a2fdb631046a5bf4f13ddc9265ff69e2903d34bad7dd913-merged.mount: Deactivated successfully.
Dec 13 07:14:23 compute-0 systemd[1]: libpod-conmon-e3d7a41e2b73d8e771775f331a60c383bce470e43881da06e942f13d7a5d9f57.scope: Deactivated successfully.
Dec 13 07:14:24 compute-0 podman[89194]: 2025-12-13 07:14:24.036363168 +0000 UTC m=+0.028681849 container create 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:24 compute-0 systemd[1]: Started libpod-conmon-1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e.scope.
Dec 13 07:14:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:24 compute-0 podman[89194]: 2025-12-13 07:14:24.093765697 +0000 UTC m=+0.086084387 container init 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:24 compute-0 podman[89194]: 2025-12-13 07:14:24.100893785 +0000 UTC m=+0.093212475 container start 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:24 compute-0 podman[89194]: 2025-12-13 07:14:24.102109139 +0000 UTC m=+0.094427819 container attach 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 07:14:24 compute-0 podman[89194]: 2025-12-13 07:14:24.024848767 +0000 UTC m=+0.017167457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v29: 1 pgs: 1 creating+peering; 0 B data, 879 MiB used, 59 GiB / 60 GiB avail
Dec 13 07:14:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 13 07:14:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qsherl(active, since 46s)
Dec 13 07:14:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec 13 07:14:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec 13 07:14:24 compute-0 lvm[89285]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:14:24 compute-0 lvm[89284]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:24 compute-0 lvm[89284]: VG ceph_vg0 finished
Dec 13 07:14:24 compute-0 lvm[89285]: VG ceph_vg1 finished
Dec 13 07:14:24 compute-0 lvm[89288]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:24 compute-0 lvm[89288]: VG ceph_vg2 finished
Dec 13 07:14:24 compute-0 priceless_hugle[89207]: {}
Dec 13 07:14:24 compute-0 systemd[1]: libpod-1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e.scope: Deactivated successfully.
Dec 13 07:14:24 compute-0 podman[89194]: 2025-12-13 07:14:24.745906876 +0000 UTC m=+0.738225566 container died 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 07:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-43a43f7adaafb9528fd0f1d7970bdf88ecb17bce41ad69c547b7092f6931814a-merged.mount: Deactivated successfully.
Dec 13 07:14:24 compute-0 podman[89194]: 2025-12-13 07:14:24.770124042 +0000 UTC m=+0.762442722 container remove 1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 07:14:24 compute-0 systemd[1]: libpod-conmon-1982c46dcb91f5e361ddf958a3d25d471b5c727a13991ee05ea161970f4bd59e.scope: Deactivated successfully.
Dec 13 07:14:24 compute-0 sudo[89124]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:24 compute-0 sudo[89299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:14:24 compute-0 sudo[89299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:24 compute-0 sudo[89299]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:25 compute-0 ceph-mon[74928]: pgmap v29: 1 pgs: 1 creating+peering; 0 B data, 879 MiB used, 59 GiB / 60 GiB avail
Dec 13 07:14:25 compute-0 ceph-mon[74928]: mgrmap e9: compute-0.qsherl(active, since 46s)
Dec 13 07:14:25 compute-0 ceph-mon[74928]: osdmap e16: 3 total, 3 up, 3 in
Dec 13 07:14:25 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:25 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:27 compute-0 ceph-mon[74928]: pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:29 compute-0 ceph-mon[74928]: pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:31 compute-0 ceph-mon[74928]: pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:31 compute-0 sudo[89347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mamrliyednbytcgzweeohvklxmqfesau ; /usr/bin/python3'
Dec 13 07:14:31 compute-0 sudo[89347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:32 compute-0 python3[89349]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:32 compute-0 podman[89351]: 2025-12-13 07:14:32.097564653 +0000 UTC m=+0.028045733 container create 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:32 compute-0 systemd[1]: Started libpod-conmon-2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f.scope.
Dec 13 07:14:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4addaf2f58c21e4a038c87e13d5f812e5dffb449bf7d7af8f0f51d99536d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4addaf2f58c21e4a038c87e13d5f812e5dffb449bf7d7af8f0f51d99536d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4addaf2f58c21e4a038c87e13d5f812e5dffb449bf7d7af8f0f51d99536d9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:32 compute-0 podman[89351]: 2025-12-13 07:14:32.151058957 +0000 UTC m=+0.081540037 container init 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 07:14:32 compute-0 podman[89351]: 2025-12-13 07:14:32.157135828 +0000 UTC m=+0.087616908 container start 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:32 compute-0 podman[89351]: 2025-12-13 07:14:32.15814765 +0000 UTC m=+0.088628730 container attach 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 07:14:32 compute-0 podman[89351]: 2025-12-13 07:14:32.086707598 +0000 UTC m=+0.017188698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 07:14:32 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520977248' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 07:14:32 compute-0 dazzling_ellis[89364]: 
Dec 13 07:14:32 compute-0 dazzling_ellis[89364]: {"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":69,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1765610062,"num_in_osds":3,"osd_in_since":1765610047,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502935552,"bytes_avail":63908990976,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2025-12-13T07:13:21:319345+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-13T07:13:21.320643+0000","services":{}},"progress_events":{}}
Dec 13 07:14:32 compute-0 systemd[1]: libpod-2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f.scope: Deactivated successfully.
Dec 13 07:14:32 compute-0 podman[89351]: 2025-12-13 07:14:32.558694553 +0000 UTC m=+0.489175644 container died 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2b4addaf2f58c21e4a038c87e13d5f812e5dffb449bf7d7af8f0f51d99536d9-merged.mount: Deactivated successfully.
Dec 13 07:14:32 compute-0 podman[89351]: 2025-12-13 07:14:32.579558336 +0000 UTC m=+0.510039417 container remove 2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f (image=quay.io/ceph/ceph:v20, name=dazzling_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 07:14:32 compute-0 systemd[1]: libpod-conmon-2579e970aaad26823417682fcaa9702df67c425ffa752a9b6a2357b0fb8c292f.scope: Deactivated successfully.
Dec 13 07:14:32 compute-0 sudo[89347]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:32 compute-0 sudo[89421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcprbprglxhinxrcbdzlzqoyiwpjlerp ; /usr/bin/python3'
Dec 13 07:14:32 compute-0 sudo[89421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:32 compute-0 python3[89423]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:32 compute-0 podman[89424]: 2025-12-13 07:14:32.979122013 +0000 UTC m=+0.030890700 container create 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:33 compute-0 systemd[1]: Started libpod-conmon-4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25.scope.
Dec 13 07:14:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4b3b0d99c81aff7dca681f98332f9814f628e47aa3b368079aa704c4053938/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4b3b0d99c81aff7dca681f98332f9814f628e47aa3b368079aa704c4053938/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:33 compute-0 podman[89424]: 2025-12-13 07:14:33.029284763 +0000 UTC m=+0.081053452 container init 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:33 compute-0 podman[89424]: 2025-12-13 07:14:33.033802715 +0000 UTC m=+0.085571404 container start 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:33 compute-0 podman[89424]: 2025-12-13 07:14:33.035005626 +0000 UTC m=+0.086774314 container attach 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:33 compute-0 podman[89424]: 2025-12-13 07:14:32.968507772 +0000 UTC m=+0.020276470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 07:14:33 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1627129991' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 13 07:14:33 compute-0 ceph-mon[74928]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:33 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/520977248' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 07:14:33 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1627129991' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:33 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1627129991' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec 13 07:14:33 compute-0 pedantic_chatterjee[89436]: pool 'vms' created
Dec 13 07:14:33 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec 13 07:14:33 compute-0 systemd[1]: libpod-4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25.scope: Deactivated successfully.
Dec 13 07:14:33 compute-0 conmon[89436]: conmon 4b25ada4cbe0bf391cd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25.scope/container/memory.events
Dec 13 07:14:33 compute-0 podman[89424]: 2025-12-13 07:14:33.424399186 +0000 UTC m=+0.476167874 container died 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b4b3b0d99c81aff7dca681f98332f9814f628e47aa3b368079aa704c4053938-merged.mount: Deactivated successfully.
Dec 13 07:14:33 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:33 compute-0 podman[89424]: 2025-12-13 07:14:33.44395923 +0000 UTC m=+0.495727918 container remove 4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25 (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:14:33 compute-0 systemd[1]: libpod-conmon-4b25ada4cbe0bf391cd078b863636dee15f410b7189a2cedf1e07dbcefc12c25.scope: Deactivated successfully.
Dec 13 07:14:33 compute-0 sudo[89421]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:33 compute-0 sudo[89495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgxxxhskchlnmfpskltkfmkahcidnqoa ; /usr/bin/python3'
Dec 13 07:14:33 compute-0 sudo[89495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:33 compute-0 python3[89497]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:33 compute-0 podman[89498]: 2025-12-13 07:14:33.699189038 +0000 UTC m=+0.026360756 container create bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:33 compute-0 systemd[1]: Started libpod-conmon-bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349.scope.
Dec 13 07:14:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2412ac98e034676ee508b9475961ecc707091a91bffe7a6a754c3d4fa0fd86e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2412ac98e034676ee508b9475961ecc707091a91bffe7a6a754c3d4fa0fd86e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:33 compute-0 podman[89498]: 2025-12-13 07:14:33.748951827 +0000 UTC m=+0.076123545 container init bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:33 compute-0 podman[89498]: 2025-12-13 07:14:33.752984337 +0000 UTC m=+0.080156044 container start bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:33 compute-0 podman[89498]: 2025-12-13 07:14:33.754115473 +0000 UTC m=+0.081287181 container attach bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 07:14:33 compute-0 podman[89498]: 2025-12-13 07:14:33.688913595 +0000 UTC m=+0.016085323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 07:14:34 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2508690959' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v36: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 13 07:14:34 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2508690959' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec 13 07:14:34 compute-0 xenodochial_maxwell[89510]: pool 'volumes' created
Dec 13 07:14:34 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec 13 07:14:34 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:34 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1627129991' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:34 compute-0 ceph-mon[74928]: osdmap e17: 3 total, 3 up, 3 in
Dec 13 07:14:34 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2508690959' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:34 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:34 compute-0 systemd[1]: libpod-bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349.scope: Deactivated successfully.
Dec 13 07:14:34 compute-0 podman[89498]: 2025-12-13 07:14:34.428710734 +0000 UTC m=+0.755882452 container died bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:14:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2412ac98e034676ee508b9475961ecc707091a91bffe7a6a754c3d4fa0fd86e-merged.mount: Deactivated successfully.
Dec 13 07:14:34 compute-0 podman[89498]: 2025-12-13 07:14:34.447999737 +0000 UTC m=+0.775171445 container remove bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349 (image=quay.io/ceph/ceph:v20, name=xenodochial_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:34 compute-0 systemd[1]: libpod-conmon-bb91b3475c91c5a16f5730ba8a374632d23ff6df6948e8bc52f7a49d56fe3349.scope: Deactivated successfully.
Dec 13 07:14:34 compute-0 sudo[89495]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:34 compute-0 sudo[89569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmysfaeguakhntmteqpnhanrighqorno ; /usr/bin/python3'
Dec 13 07:14:34 compute-0 sudo[89569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:34 compute-0 python3[89571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:34 compute-0 podman[89572]: 2025-12-13 07:14:34.702414783 +0000 UTC m=+0.029594093 container create 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:34 compute-0 systemd[1]: Started libpod-conmon-498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1.scope.
Dec 13 07:14:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2858ab58be0a1f66ec601d1b31c498137e8712d66d210a06cda28af62f6f59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2858ab58be0a1f66ec601d1b31c498137e8712d66d210a06cda28af62f6f59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:34 compute-0 podman[89572]: 2025-12-13 07:14:34.762516095 +0000 UTC m=+0.089695405 container init 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:14:34 compute-0 podman[89572]: 2025-12-13 07:14:34.766787995 +0000 UTC m=+0.093967294 container start 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:34 compute-0 podman[89572]: 2025-12-13 07:14:34.767841334 +0000 UTC m=+0.095020634 container attach 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:34 compute-0 podman[89572]: 2025-12-13 07:14:34.690328187 +0000 UTC m=+0.017507487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 07:14:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1452164855' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 13 07:14:35 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1452164855' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 13 07:14:35 compute-0 angry_goldstine[89584]: pool 'backups' created
Dec 13 07:14:35 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 13 07:14:35 compute-0 ceph-mon[74928]: pgmap v36: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2508690959' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:35 compute-0 ceph-mon[74928]: osdmap e18: 3 total, 3 up, 3 in
Dec 13 07:14:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1452164855' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1452164855' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:35 compute-0 ceph-mon[74928]: osdmap e19: 3 total, 3 up, 3 in
Dec 13 07:14:35 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:35 compute-0 systemd[1]: libpod-498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1.scope: Deactivated successfully.
Dec 13 07:14:35 compute-0 podman[89572]: 2025-12-13 07:14:35.430947681 +0000 UTC m=+0.758126971 container died 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec2858ab58be0a1f66ec601d1b31c498137e8712d66d210a06cda28af62f6f59-merged.mount: Deactivated successfully.
Dec 13 07:14:35 compute-0 podman[89572]: 2025-12-13 07:14:35.448111811 +0000 UTC m=+0.775291111 container remove 498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1 (image=quay.io/ceph/ceph:v20, name=angry_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:14:35 compute-0 sudo[89569]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:35 compute-0 systemd[1]: libpod-conmon-498da08979393cb4104be64c3e078ba0fd547d3efa84f45be086ea79248804d1.scope: Deactivated successfully.
Dec 13 07:14:35 compute-0 sudo[89644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmlmxxgeqfwiqsvoasvulunhkihqetos ; /usr/bin/python3'
Dec 13 07:14:35 compute-0 sudo[89644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:35 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:35 compute-0 python3[89646]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:35 compute-0 podman[89647]: 2025-12-13 07:14:35.701165359 +0000 UTC m=+0.027985741 container create 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:35 compute-0 systemd[1]: Started libpod-conmon-284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e.scope.
Dec 13 07:14:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19785ea1c96cbb045bb91b6890c6176886e0b919ef7edb32ced8108f0f64a9b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19785ea1c96cbb045bb91b6890c6176886e0b919ef7edb32ced8108f0f64a9b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:35 compute-0 podman[89647]: 2025-12-13 07:14:35.757619785 +0000 UTC m=+0.084440167 container init 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:14:35 compute-0 podman[89647]: 2025-12-13 07:14:35.762520417 +0000 UTC m=+0.089340809 container start 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:35 compute-0 podman[89647]: 2025-12-13 07:14:35.763548368 +0000 UTC m=+0.090368750 container attach 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 07:14:35 compute-0 podman[89647]: 2025-12-13 07:14:35.690345282 +0000 UTC m=+0.017165664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 07:14:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3374455308' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v39: 4 pgs: 1 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 13 07:14:36 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3374455308' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3374455308' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 13 07:14:36 compute-0 ecstatic_lamport[89659]: pool 'images' created
Dec 13 07:14:36 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 13 07:14:36 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:36 compute-0 systemd[1]: libpod-284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e.scope: Deactivated successfully.
Dec 13 07:14:36 compute-0 podman[89647]: 2025-12-13 07:14:36.438641774 +0000 UTC m=+0.765462157 container died 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-19785ea1c96cbb045bb91b6890c6176886e0b919ef7edb32ced8108f0f64a9b9-merged.mount: Deactivated successfully.
Dec 13 07:14:36 compute-0 podman[89647]: 2025-12-13 07:14:36.45434581 +0000 UTC m=+0.781166183 container remove 284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e (image=quay.io/ceph/ceph:v20, name=ecstatic_lamport, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:14:36 compute-0 systemd[1]: libpod-conmon-284be59bf7ecb16e0c5c3b6e1765c6bf129f5ee12bd6ee98d88431958821f29e.scope: Deactivated successfully.
Dec 13 07:14:36 compute-0 sudo[89644]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:36 compute-0 sudo[89719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkklxiatbabqrfnddfnrhbznimgmatrk ; /usr/bin/python3'
Dec 13 07:14:36 compute-0 sudo[89719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:36 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:36 compute-0 python3[89721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:36 compute-0 podman[89722]: 2025-12-13 07:14:36.712186276 +0000 UTC m=+0.025871116 container create 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:36 compute-0 systemd[1]: Started libpod-conmon-405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b.scope.
Dec 13 07:14:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552e722051a369f549e5d7535eb8c2fdbd5e793ac08331177c52f102003e3f18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552e722051a369f549e5d7535eb8c2fdbd5e793ac08331177c52f102003e3f18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:36 compute-0 podman[89722]: 2025-12-13 07:14:36.763368553 +0000 UTC m=+0.077053393 container init 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:36 compute-0 podman[89722]: 2025-12-13 07:14:36.767177542 +0000 UTC m=+0.080862383 container start 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Dec 13 07:14:36 compute-0 podman[89722]: 2025-12-13 07:14:36.768198201 +0000 UTC m=+0.081883042 container attach 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 07:14:36 compute-0 podman[89722]: 2025-12-13 07:14:36.702042019 +0000 UTC m=+0.015726880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 07:14:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1251626788' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 13 07:14:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1251626788' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 13 07:14:37 compute-0 keen_knuth[89734]: pool 'cephfs.cephfs.meta' created
Dec 13 07:14:37 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 13 07:14:37 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:37 compute-0 ceph-mon[74928]: pgmap v39: 4 pgs: 1 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:37 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3374455308' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:37 compute-0 ceph-mon[74928]: osdmap e20: 3 total, 3 up, 3 in
Dec 13 07:14:37 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1251626788' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:37 compute-0 systemd[1]: libpod-405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b.scope: Deactivated successfully.
Dec 13 07:14:37 compute-0 conmon[89734]: conmon 405b07271b4ca84efb67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b.scope/container/memory.events
Dec 13 07:14:37 compute-0 podman[89722]: 2025-12-13 07:14:37.443768935 +0000 UTC m=+0.757453775 container died 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-552e722051a369f549e5d7535eb8c2fdbd5e793ac08331177c52f102003e3f18-merged.mount: Deactivated successfully.
Dec 13 07:14:37 compute-0 podman[89722]: 2025-12-13 07:14:37.462482477 +0000 UTC m=+0.776167318 container remove 405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b (image=quay.io/ceph/ceph:v20, name=keen_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:37 compute-0 systemd[1]: libpod-conmon-405b07271b4ca84efb676cc76f6644929b99da7af8415db77eda15df652b733b.scope: Deactivated successfully.
Dec 13 07:14:37 compute-0 sudo[89719]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:37 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:37 compute-0 sudo[89795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuokbpklrmustecmszxrsmribyinkzjm ; /usr/bin/python3'
Dec 13 07:14:37 compute-0 sudo[89795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:37 compute-0 python3[89797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:37 compute-0 podman[89798]: 2025-12-13 07:14:37.718139478 +0000 UTC m=+0.027889450 container create 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:37 compute-0 systemd[1]: Started libpod-conmon-2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a.scope.
Dec 13 07:14:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79858012b8f09bc143436e83dcd7139c918ee35d869b81d016329c0b112e9f5d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79858012b8f09bc143436e83dcd7139c918ee35d869b81d016329c0b112e9f5d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:37 compute-0 podman[89798]: 2025-12-13 07:14:37.771492285 +0000 UTC m=+0.081242278 container init 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:14:37 compute-0 podman[89798]: 2025-12-13 07:14:37.781302674 +0000 UTC m=+0.091052646 container start 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:37 compute-0 podman[89798]: 2025-12-13 07:14:37.784215409 +0000 UTC m=+0.093965380 container attach 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Dec 13 07:14:37 compute-0 podman[89798]: 2025-12-13 07:14:37.706629495 +0000 UTC m=+0.016379487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 07:14:38 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2902257096' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v42: 6 pgs: 1 creating+peering, 2 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:14:38
Dec 13 07:14:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:14:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Dec 13 07:14:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 13 07:14:38 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2902257096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 13 07:14:38 compute-0 youthful_buck[89811]: pool 'cephfs.cephfs.data' created
Dec 13 07:14:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 13 07:14:38 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:38 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1251626788' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:38 compute-0 ceph-mon[74928]: osdmap e21: 3 total, 3 up, 3 in
Dec 13 07:14:38 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2902257096' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 07:14:38 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:38 compute-0 systemd[1]: libpod-2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a.scope: Deactivated successfully.
Dec 13 07:14:38 compute-0 podman[89798]: 2025-12-13 07:14:38.460036533 +0000 UTC m=+0.769786505 container died 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 07:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-79858012b8f09bc143436e83dcd7139c918ee35d869b81d016329c0b112e9f5d-merged.mount: Deactivated successfully.
Dec 13 07:14:38 compute-0 podman[89798]: 2025-12-13 07:14:38.482326166 +0000 UTC m=+0.792076139 container remove 2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a (image=quay.io/ceph/ceph:v20, name=youthful_buck, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:38 compute-0 systemd[1]: libpod-conmon-2ed195410360f11343614c280fe56d3a90316c83bfb5a17c4a9517415d2cfb1a.scope: Deactivated successfully.
Dec 13 07:14:38 compute-0 sudo[89795]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:38 compute-0 sudo[89871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuozinzsidotrujmvapbqlfvrzmzkllt ; /usr/bin/python3'
Dec 13 07:14:38 compute-0 sudo[89871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:38 compute-0 python3[89873]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:38 compute-0 podman[89874]: 2025-12-13 07:14:38.764160607 +0000 UTC m=+0.026957588 container create 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:14:38 compute-0 systemd[1]: Started libpod-conmon-2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88.scope.
Dec 13 07:14:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c6e9760e05eb70600c53bad02deb0155b7dea505b83fb76134e1891f743163/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c6e9760e05eb70600c53bad02deb0155b7dea505b83fb76134e1891f743163/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:38 compute-0 podman[89874]: 2025-12-13 07:14:38.827237983 +0000 UTC m=+0.090034982 container init 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:38 compute-0 podman[89874]: 2025-12-13 07:14:38.831379928 +0000 UTC m=+0.094176907 container start 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:38 compute-0 podman[89874]: 2025-12-13 07:14:38.832479514 +0000 UTC m=+0.095276494 container attach 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:14:38 compute-0 podman[89874]: 2025-12-13 07:14:38.75374899 +0000 UTC m=+0.016545989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 07:14:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:14:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:14:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 13 07:14:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2315924078' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 13 07:14:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 13 07:14:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2315924078' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 13 07:14:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 13 07:14:39 compute-0 nifty_kapitsa[89887]: enabled application 'rbd' on pool 'vms'
Dec 13 07:14:39 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 13 07:14:39 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 3d02a20f-ab72-4b9e-89db-75b6b986c2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 13 07:14:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:14:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:39 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:39 compute-0 ceph-mon[74928]: pgmap v42: 6 pgs: 1 creating+peering, 2 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:39 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2902257096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 07:14:39 compute-0 ceph-mon[74928]: osdmap e22: 3 total, 3 up, 3 in
Dec 13 07:14:39 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:39 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2315924078' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 13 07:14:39 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:39 compute-0 systemd[1]: libpod-2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88.scope: Deactivated successfully.
Dec 13 07:14:39 compute-0 podman[89874]: 2025-12-13 07:14:39.461371073 +0000 UTC m=+0.724168053 container died 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9c6e9760e05eb70600c53bad02deb0155b7dea505b83fb76134e1891f743163-merged.mount: Deactivated successfully.
Dec 13 07:14:39 compute-0 podman[89874]: 2025-12-13 07:14:39.479540434 +0000 UTC m=+0.742337413 container remove 2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88 (image=quay.io/ceph/ceph:v20, name=nifty_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 07:14:39 compute-0 sudo[89871]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:39 compute-0 systemd[1]: libpod-conmon-2e98911228065af35e96bd97e53842dea5e09f6c8b06905558c0f30732a24e88.scope: Deactivated successfully.
Dec 13 07:14:39 compute-0 sudo[89945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jebqizgehxuxgpicfyydlbvmmmpbitcq ; /usr/bin/python3'
Dec 13 07:14:39 compute-0 sudo[89945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:39 compute-0 python3[89947]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:39 compute-0 podman[89948]: 2025-12-13 07:14:39.734285751 +0000 UTC m=+0.028089667 container create 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:39 compute-0 systemd[1]: Started libpod-conmon-2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b.scope.
Dec 13 07:14:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3963b6b96fb103b955b721fe7c90da14278ab054b9f24f6879fa55de43ef129/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3963b6b96fb103b955b721fe7c90da14278ab054b9f24f6879fa55de43ef129/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:39 compute-0 podman[89948]: 2025-12-13 07:14:39.777921064 +0000 UTC m=+0.071725010 container init 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 07:14:39 compute-0 podman[89948]: 2025-12-13 07:14:39.782096532 +0000 UTC m=+0.075900458 container start 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 07:14:39 compute-0 podman[89948]: 2025-12-13 07:14:39.783200968 +0000 UTC m=+0.077004893 container attach 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:14:39 compute-0 podman[89948]: 2025-12-13 07:14:39.722994129 +0000 UTC m=+0.016798054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 13 07:14:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4080177198' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 13 07:14:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v45: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:14:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 13 07:14:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4080177198' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 13 07:14:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 13 07:14:40 compute-0 ecstatic_allen[89960]: enabled application 'rbd' on pool 'volumes'
Dec 13 07:14:40 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 13 07:14:40 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev b94c178d-fa47-4df1-8a52-3fbb7d542575 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 13 07:14:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:14:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:40 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2315924078' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 13 07:14:40 compute-0 ceph-mon[74928]: osdmap e23: 3 total, 3 up, 3 in
Dec 13 07:14:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:40 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4080177198' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 13 07:14:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:40 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4080177198' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 13 07:14:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:40 compute-0 ceph-mon[74928]: osdmap e24: 3 total, 3 up, 3 in
Dec 13 07:14:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:40 compute-0 systemd[1]: libpod-2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b.scope: Deactivated successfully.
Dec 13 07:14:40 compute-0 conmon[89960]: conmon 2bcd13a6956644982eb6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b.scope/container/memory.events
Dec 13 07:14:40 compute-0 podman[89948]: 2025-12-13 07:14:40.465052166 +0000 UTC m=+0.758856102 container died 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 07:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3963b6b96fb103b955b721fe7c90da14278ab054b9f24f6879fa55de43ef129-merged.mount: Deactivated successfully.
Dec 13 07:14:40 compute-0 podman[89948]: 2025-12-13 07:14:40.482999188 +0000 UTC m=+0.776803114 container remove 2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b (image=quay.io/ceph/ceph:v20, name=ecstatic_allen, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 07:14:40 compute-0 systemd[1]: libpod-conmon-2bcd13a6956644982eb6a40fa78c028b319da91aad15a2c2cf23bb0db16a488b.scope: Deactivated successfully.
Dec 13 07:14:40 compute-0 sudo[89945]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:40 compute-0 sudo[90018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbxvvlhfsytzwgpammvbheokcaumegzr ; /usr/bin/python3'
Dec 13 07:14:40 compute-0 sudo[90018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:40 compute-0 python3[90020]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:40 compute-0 podman[90021]: 2025-12-13 07:14:40.73252794 +0000 UTC m=+0.027765828 container create d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:14:40 compute-0 systemd[1]: Started libpod-conmon-d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b.scope.
Dec 13 07:14:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ae9864bc1038050d6931ffdd98fc340eb28b8f1d60f2a16228d46f7ae568d40/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ae9864bc1038050d6931ffdd98fc340eb28b8f1d60f2a16228d46f7ae568d40/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:40 compute-0 podman[90021]: 2025-12-13 07:14:40.788349599 +0000 UTC m=+0.083587485 container init d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:40 compute-0 podman[90021]: 2025-12-13 07:14:40.792761941 +0000 UTC m=+0.087999828 container start d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 07:14:40 compute-0 podman[90021]: 2025-12-13 07:14:40.793966414 +0000 UTC m=+0.089204302 container attach d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:40 compute-0 podman[90021]: 2025-12-13 07:14:40.721490034 +0000 UTC m=+0.016727941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 13 07:14:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2829471765' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 13 07:14:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 13 07:14:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2829471765' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 13 07:14:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 13 07:14:41 compute-0 stoic_colden[90034]: enabled application 'rbd' on pool 'backups'
Dec 13 07:14:41 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 13 07:14:41 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 4f8dd59e-3051-4e46-8d31-fee7db51a7ff (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 13 07:14:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:14:41 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:41 compute-0 ceph-mon[74928]: pgmap v45: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:41 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2829471765' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 13 07:14:41 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:41 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2829471765' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 13 07:14:41 compute-0 ceph-mon[74928]: osdmap e25: 3 total, 3 up, 3 in
Dec 13 07:14:41 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:41 compute-0 systemd[1]: libpod-d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b.scope: Deactivated successfully.
Dec 13 07:14:41 compute-0 conmon[90034]: conmon d8f476a2696e752cbfe0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b.scope/container/memory.events
Dec 13 07:14:41 compute-0 podman[90021]: 2025-12-13 07:14:41.466241374 +0000 UTC m=+0.761479261 container died d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 07:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ae9864bc1038050d6931ffdd98fc340eb28b8f1d60f2a16228d46f7ae568d40-merged.mount: Deactivated successfully.
Dec 13 07:14:41 compute-0 podman[90021]: 2025-12-13 07:14:41.484264368 +0000 UTC m=+0.779502255 container remove d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b (image=quay.io/ceph/ceph:v20, name=stoic_colden, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:41 compute-0 sudo[90018]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:41 compute-0 systemd[1]: libpod-conmon-d8f476a2696e752cbfe04867c31133b8c4daba31be20c15fac0e62c2dd264c8b.scope: Deactivated successfully.
Dec 13 07:14:41 compute-0 sudo[90092]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkfzjdmsolwvrrpevdmwutsnveoutqdx ; /usr/bin/python3'
Dec 13 07:14:41 compute-0 sudo[90092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 24 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=24 pruub=8.740574837s) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active pruub 31.924657822s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 24 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=24 pruub=8.740574837s) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown pruub 31.924657822s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 python3[90094]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:41 compute-0 podman[90095]: 2025-12-13 07:14:41.726685922 +0000 UTC m=+0.024455886 container create 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:14:41 compute-0 systemd[1]: Started libpod-conmon-9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355.scope.
Dec 13 07:14:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7505eccb8a0be9457a2d2f8158f8aa7fafb8e9a0c968ad7c101b18585e380e4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7505eccb8a0be9457a2d2f8158f8aa7fafb8e9a0c968ad7c101b18585e380e4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:41 compute-0 podman[90095]: 2025-12-13 07:14:41.770675412 +0000 UTC m=+0.068445386 container init 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:41 compute-0 podman[90095]: 2025-12-13 07:14:41.774657136 +0000 UTC m=+0.072427099 container start 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:41 compute-0 podman[90095]: 2025-12-13 07:14:41.775689225 +0000 UTC m=+0.073459189 container attach 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1d( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1f( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1e( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1c( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.b( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.a( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.9( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.8( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.6( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.5( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.4( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.3( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.2( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.7( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.c( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.f( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.10( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.11( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.e( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.d( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.12( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.13( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.14( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.15( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.16( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.18( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.19( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1a( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.17( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 25 pg[2.1b( empty local-lis/les=17/18 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:41 compute-0 podman[90095]: 2025-12-13 07:14:41.717135883 +0000 UTC m=+0.014905867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539658522' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 13 07:14:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v48: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539658522' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 13 07:14:42 compute-0 crazy_wu[90108]: enabled application 'rbd' on pool 'images'
Dec 13 07:14:42 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26 pruub=9.974254608s) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active pruub 39.922641754s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 13 07:14:42 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev ab92441d-cd7c-4669-8597-8e45e7d32a0d (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 13 07:14:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:14:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:42 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26 pruub=9.974254608s) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown pruub 39.922641754s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1e( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1c( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.a( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.9( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.8( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.5( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.3( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.6( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.4( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.2( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.7( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.c( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.e( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.0( empty local-lis/les=24/26 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.11( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.10( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.12( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.13( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.14( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.16( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.17( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.15( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.19( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1a( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.1b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 26 pg[2.18( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=17/17 les/c/f=18/18/0 sis=24) [2] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:42 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3539658522' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 13 07:14:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:42 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3539658522' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 13 07:14:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:42 compute-0 ceph-mon[74928]: osdmap e26: 3 total, 3 up, 3 in
Dec 13 07:14:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:42 compute-0 systemd[1]: libpod-9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355.scope: Deactivated successfully.
Dec 13 07:14:42 compute-0 podman[90095]: 2025-12-13 07:14:42.471034054 +0000 UTC m=+0.768804018 container died 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7505eccb8a0be9457a2d2f8158f8aa7fafb8e9a0c968ad7c101b18585e380e4a-merged.mount: Deactivated successfully.
Dec 13 07:14:42 compute-0 podman[90095]: 2025-12-13 07:14:42.488719133 +0000 UTC m=+0.786489108 container remove 9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355 (image=quay.io/ceph/ceph:v20, name=crazy_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:14:42 compute-0 systemd[1]: libpod-conmon-9045833b68e740863271022224ee0fa5d5f6817543a86a56703460fcc48a3355.scope: Deactivated successfully.
Dec 13 07:14:42 compute-0 sudo[90092]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:42 compute-0 sudo[90165]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppbqynwyisluefjcxkifspyorffjuzfi ; /usr/bin/python3'
Dec 13 07:14:42 compute-0 sudo[90165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:42 compute-0 python3[90167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:42 compute-0 podman[90168]: 2025-12-13 07:14:42.732591836 +0000 UTC m=+0.025515468 container create bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:14:42 compute-0 systemd[1]: Started libpod-conmon-bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78.scope.
Dec 13 07:14:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0ee5b421dc373cd35c3ce81af1aca0cc4147e1fdcf7a9b811466eb24bac979/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0ee5b421dc373cd35c3ce81af1aca0cc4147e1fdcf7a9b811466eb24bac979/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:42 compute-0 podman[90168]: 2025-12-13 07:14:42.785454742 +0000 UTC m=+0.078378374 container init bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:42 compute-0 podman[90168]: 2025-12-13 07:14:42.7899332 +0000 UTC m=+0.082856831 container start bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:42 compute-0 podman[90168]: 2025-12-13 07:14:42.791503451 +0000 UTC m=+0.084427103 container attach bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:14:42 compute-0 podman[90168]: 2025-12-13 07:14:42.722233287 +0000 UTC m=+0.015156939 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 13 07:14:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/904926521' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 13 07:14:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 13 07:14:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/904926521' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 13 07:14:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 13 07:14:43 compute-0 youthful_varahamihira[90180]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 13 07:14:43 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 26 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26 pruub=15.961985588s) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active pruub 43.764850616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1e( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1d( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1c( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.7( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.8( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.b( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1f( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.6( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.a( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1b( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.5( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1a( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.9( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.4( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.19( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.3( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.2( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.c( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.d( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.e( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.10( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.11( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.12( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.13( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.14( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.15( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.16( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.17( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.f( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.18( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26 pruub=15.961985588s) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown pruub 43.764850616s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.13( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.11( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.15( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.14( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.17( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.16( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.18( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.19( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1a( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1b( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1d( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1c( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1e( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1f( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.12( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.2( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.4( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.3( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.6( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.5( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.7( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.8( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.a( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.c( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.b( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.e( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.d( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.f( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.10( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.1( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.7( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.6( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 01779521-6cb6-4003-b6fb-45475e592b47 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 13 07:14:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:14:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:43 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 27 pg[3.9( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.19( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.3( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.0( empty local-lis/les=26/27 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.12( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.15( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.16( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.17( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 27 pg[4.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:43 compute-0 ceph-mon[74928]: pgmap v48: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:43 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/904926521' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 13 07:14:43 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:43 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/904926521' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 13 07:14:43 compute-0 ceph-mon[74928]: osdmap e27: 3 total, 3 up, 3 in
Dec 13 07:14:43 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:14:43 compute-0 systemd[1]: libpod-bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78.scope: Deactivated successfully.
Dec 13 07:14:43 compute-0 conmon[90180]: conmon bcef0b9f440e271a77a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78.scope/container/memory.events
Dec 13 07:14:43 compute-0 podman[90168]: 2025-12-13 07:14:43.47654756 +0000 UTC m=+0.769471193 container died bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 07:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca0ee5b421dc373cd35c3ce81af1aca0cc4147e1fdcf7a9b811466eb24bac979-merged.mount: Deactivated successfully.
Dec 13 07:14:43 compute-0 podman[90168]: 2025-12-13 07:14:43.494130567 +0000 UTC m=+0.787054200 container remove bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78 (image=quay.io/ceph/ceph:v20, name=youthful_varahamihira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:43 compute-0 systemd[1]: libpod-conmon-bcef0b9f440e271a77a271c60899bbe9642badd2701942923565f2064e93ad78.scope: Deactivated successfully.
Dec 13 07:14:43 compute-0 sudo[90165]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:43 compute-0 sudo[90239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udtwfadqxphlqabnpcwodahzcvdiwlth ; /usr/bin/python3'
Dec 13 07:14:43 compute-0 sudo[90239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:43 compute-0 python3[90241]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:43 compute-0 podman[90242]: 2025-12-13 07:14:43.751290974 +0000 UTC m=+0.030271677 container create eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 07:14:43 compute-0 systemd[1]: Started libpod-conmon-eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba.scope.
Dec 13 07:14:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2e0cc8d17b37b4fa0b5bc1f8ac57745ccf56f35bb669ae0830d4227eabccd0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2e0cc8d17b37b4fa0b5bc1f8ac57745ccf56f35bb669ae0830d4227eabccd0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:43 compute-0 podman[90242]: 2025-12-13 07:14:43.809099175 +0000 UTC m=+0.088079898 container init eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:43 compute-0 podman[90242]: 2025-12-13 07:14:43.812786916 +0000 UTC m=+0.091767620 container start eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:14:43 compute-0 podman[90242]: 2025-12-13 07:14:43.814112438 +0000 UTC m=+0.093093140 container attach eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:43 compute-0 podman[90242]: 2025-12-13 07:14:43.73824291 +0000 UTC m=+0.017223633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Dec 13 07:14:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 13 07:14:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328049796' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v51: 100 pgs: 38 active+clean, 62 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:14:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:14:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:44 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 13 07:14:44 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 13 07:14:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 13 07:14:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328049796' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 13 07:14:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 13 07:14:44 compute-0 fervent_elgamal[90254]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 13 07:14:44 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.7( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 0c63b812-d8ea-45f7-a85d-0ba9cc684e1f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 3d02a20f-ab72-4b9e-89db-75b6b986c2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 3d02a20f-ab72-4b9e-89db-75b6b986c2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev b94c178d-fa47-4df1-8a52-3fbb7d542575 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event b94c178d-fa47-4df1-8a52-3fbb7d542575 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 4f8dd59e-3051-4e46-8d31-fee7db51a7ff (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 4f8dd59e-3051-4e46-8d31-fee7db51a7ff (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev ab92441d-cd7c-4669-8597-8e45e7d32a0d (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event ab92441d-cd7c-4669-8597-8e45e7d32a0d (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.6( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.4( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=26/28 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.2( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.b( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.12( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.17( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.18( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 28 pg[3.19( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [1] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 01779521-6cb6-4003-b6fb-45475e592b47 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 01779521-6cb6-4003-b6fb-45475e592b47 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 0c63b812-d8ea-45f7-a85d-0ba9cc684e1f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 13 07:14:44 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 0c63b812-d8ea-45f7-a85d-0ba9cc684e1f (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 13 07:14:44 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1328049796' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 13 07:14:44 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:44 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:44 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:14:44 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1328049796' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 13 07:14:44 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:44 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:44 compute-0 ceph-mon[74928]: osdmap e28: 3 total, 3 up, 3 in
Dec 13 07:14:44 compute-0 systemd[1]: libpod-eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba.scope: Deactivated successfully.
Dec 13 07:14:44 compute-0 podman[90242]: 2025-12-13 07:14:44.474551451 +0000 UTC m=+0.753532164 container died eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e2e0cc8d17b37b4fa0b5bc1f8ac57745ccf56f35bb669ae0830d4227eabccd0-merged.mount: Deactivated successfully.
Dec 13 07:14:44 compute-0 podman[90242]: 2025-12-13 07:14:44.494830807 +0000 UTC m=+0.773811510 container remove eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba (image=quay.io/ceph/ceph:v20, name=fervent_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:44 compute-0 systemd[1]: libpod-conmon-eefeecc58ad7e1dc6d6366319dd9a3e25b752cbd112288a67c2131adc9fdf5ba.scope: Deactivated successfully.
Dec 13 07:14:44 compute-0 sudo[90239]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=9.272872925s) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 41.943721771s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=9.272872925s) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown pruub 41.943721771s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 28 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.253122330s) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.938323975s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 28 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=8.253122330s) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown pruub 34.938323975s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 python3[90363]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:14:45 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 13 07:14:45 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 13 07:14:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 13 07:14:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1c( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1f( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.10( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.17( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.8( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.a( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.b( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.6( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.e( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.d( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1b( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=20/21 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1c( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1e( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1f( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1d( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.10( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.11( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.14( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.12( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.15( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.8( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.17( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.a( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.b( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.16( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.13( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.7( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.6( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.5( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.0( empty local-lis/les=28/29 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.2( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.4( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.3( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.f( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.e( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.d( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.9( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.c( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1a( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.1b( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.18( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 29 pg[5.19( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [2] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1a( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.15( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.14( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.17( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.16( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.11( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.10( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.13( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.12( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.c( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.d( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.f( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.e( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.2( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.3( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1b( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.6( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.b( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.18( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.7( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.19( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.4( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.8( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.9( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.5( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.a( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1e( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1f( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1c( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1d( empty local-lis/les=21/22 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:45 compute-0 ceph-mon[74928]: pgmap v51: 100 pgs: 38 active+clean, 62 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:45 compute-0 ceph-mon[74928]: 2.1f scrub starts
Dec 13 07:14:45 compute-0 ceph-mon[74928]: 2.1f scrub ok
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1a( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.16( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.14( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.15( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.17( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.10( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.11( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.13( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.12( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.0( empty local-lis/les=28/29 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.2( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.3( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1b( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.6( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.b( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.18( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.4( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.9( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.5( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.a( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.1d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.19( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.8( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 29 pg[6.7( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:45 compute-0 python3[90434]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610085.1056552-36981-271292940554060/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:14:45 compute-0 sudo[90534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plglgjdfhrlhtopondsbgstxqjyiobba ; /usr/bin/python3'
Dec 13 07:14:45 compute-0 sudo[90534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:46 compute-0 python3[90536]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:14:46 compute-0 sudo[90534]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:46 compute-0 sudo[90609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsapzysqddkbmyekhoatyduqyykreiw ; /usr/bin/python3'
Dec 13 07:14:46 compute-0 sudo[90609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v54: 162 pgs: 69 active+clean, 93 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:14:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:46 compute-0 python3[90611]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610085.8035133-36995-263164690150905/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=e0c31109065f9377d9a1ac1458da111ccd8d5eb7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:14:46 compute-0 sudo[90609]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:46 compute-0 sudo[90659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gktjbdfmskbtflditfpjoccazrshzpga ; /usr/bin/python3'
Dec 13 07:14:46 compute-0 sudo[90659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 13 07:14:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 13 07:14:46 compute-0 ceph-mon[74928]: 2.b scrub starts
Dec 13 07:14:46 compute-0 ceph-mon[74928]: 2.b scrub ok
Dec 13 07:14:46 compute-0 ceph-mon[74928]: osdmap e29: 3 total, 3 up, 3 in
Dec 13 07:14:46 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:14:46 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 13 07:14:46 compute-0 python3[90661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:46 compute-0 podman[90662]: 2025-12-13 07:14:46.604424887 +0000 UTC m=+0.029620023 container create ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:46 compute-0 systemd[1]: Started libpod-conmon-ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0.scope.
Dec 13 07:14:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c0ef990b6dbff8a861238c9762a046eb4f42a6c963d19b8ea3e5f53798184a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c0ef990b6dbff8a861238c9762a046eb4f42a6c963d19b8ea3e5f53798184a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c0ef990b6dbff8a861238c9762a046eb4f42a6c963d19b8ea3e5f53798184a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:46 compute-0 podman[90662]: 2025-12-13 07:14:46.657672006 +0000 UTC m=+0.082867142 container init ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:46 compute-0 podman[90662]: 2025-12-13 07:14:46.661693724 +0000 UTC m=+0.086888851 container start ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:46 compute-0 podman[90662]: 2025-12-13 07:14:46.662644191 +0000 UTC m=+0.087839327 container attach ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 07:14:46 compute-0 podman[90662]: 2025-12-13 07:14:46.592293486 +0000 UTC m=+0.017488642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 07:14:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2924988186' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 07:14:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2924988186' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 07:14:46 compute-0 frosty_boyd[90674]: 
Dec 13 07:14:46 compute-0 frosty_boyd[90674]: [global]
Dec 13 07:14:46 compute-0 frosty_boyd[90674]:         fsid = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:14:46 compute-0 frosty_boyd[90674]:         mon_host = 192.168.122.100
Dec 13 07:14:46 compute-0 frosty_boyd[90674]:         rgw_keystone_api_version = 3
Dec 13 07:14:46 compute-0 systemd[1]: libpod-ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0.scope: Deactivated successfully.
Dec 13 07:14:47 compute-0 podman[90706]: 2025-12-13 07:14:47.018935836 +0000 UTC m=+0.015389596 container died ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:14:47 compute-0 sudo[90699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:47 compute-0 sudo[90699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:47 compute-0 sudo[90699]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c0ef990b6dbff8a861238c9762a046eb4f42a6c963d19b8ea3e5f53798184a-merged.mount: Deactivated successfully.
Dec 13 07:14:47 compute-0 podman[90706]: 2025-12-13 07:14:47.041107428 +0000 UTC m=+0.037561178 container remove ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0 (image=quay.io/ceph/ceph:v20, name=frosty_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:47 compute-0 systemd[1]: libpod-conmon-ece869f1b61462d684212cd9ae598e74f922f244b46082ce46f6fb72bda832b0.scope: Deactivated successfully.
Dec 13 07:14:47 compute-0 sudo[90659]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:47 compute-0 sudo[90736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:14:47 compute-0 sudo[90736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:47 compute-0 sudo[90784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqbllkfwcnthplwmtvtolqbmhsxfovjn ; /usr/bin/python3'
Dec 13 07:14:47 compute-0 sudo[90784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:47 compute-0 python3[90786]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:47 compute-0 podman[90803]: 2025-12-13 07:14:47.324136845 +0000 UTC m=+0.029271578 container create 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:14:47 compute-0 systemd[1]: Started libpod-conmon-48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899.scope.
Dec 13 07:14:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa00fdd55525d739f4830194831740be7293c830650cb0b38a15f450ea3bd62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa00fdd55525d739f4830194831740be7293c830650cb0b38a15f450ea3bd62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa00fdd55525d739f4830194831740be7293c830650cb0b38a15f450ea3bd62/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:47 compute-0 podman[90803]: 2025-12-13 07:14:47.378466398 +0000 UTC m=+0.083601142 container init 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:14:47 compute-0 podman[90803]: 2025-12-13 07:14:47.382720364 +0000 UTC m=+0.087855087 container start 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:47 compute-0 podman[90803]: 2025-12-13 07:14:47.383858452 +0000 UTC m=+0.088993175 container attach 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:47 compute-0 podman[90803]: 2025-12-13 07:14:47.311908341 +0000 UTC m=+0.017043085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:47 compute-0 podman[90837]: 2025-12-13 07:14:47.41936622 +0000 UTC m=+0.035743482 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: pgmap v54: 162 pgs: 69 active+clean, 93 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:47 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:14:47 compute-0 ceph-mon[74928]: osdmap e30: 3 total, 3 up, 3 in
Dec 13 07:14:47 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2924988186' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 07:14:47 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2924988186' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 07:14:47 compute-0 podman[90837]: 2025-12-13 07:14:47.499315395 +0000 UTC m=+0.115692658 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 30 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=30 pruub=15.852033615s) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active pruub 47.802551270s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 30 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=30 pruub=15.852033615s) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown pruub 47.802551270s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2200430480' entity='client.admin' 
Dec 13 07:14:47 compute-0 reverent_murdock[90831]: set ssl_option
Dec 13 07:14:47 compute-0 systemd[1]: libpod-48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899.scope: Deactivated successfully.
Dec 13 07:14:47 compute-0 podman[90803]: 2025-12-13 07:14:47.818447439 +0000 UTC m=+0.523582161 container died 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 07:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fa00fdd55525d739f4830194831740be7293c830650cb0b38a15f450ea3bd62-merged.mount: Deactivated successfully.
Dec 13 07:14:47 compute-0 podman[90803]: 2025-12-13 07:14:47.842770384 +0000 UTC m=+0.547905107 container remove 48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899 (image=quay.io/ceph/ceph:v20, name=reverent_murdock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 07:14:47 compute-0 systemd[1]: libpod-conmon-48e496cd13c1932f6fa7c80eb02e94e322d77c73315376fb66831d33fb076899.scope: Deactivated successfully.
Dec 13 07:14:47 compute-0 sudo[90784]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:47 compute-0 sudo[90736]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:47 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:47 compute-0 sudo[90991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:47 compute-0 sudo[90991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:47 compute-0 sudo[90991]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:47 compute-0 sudo[91038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khhfikoepwfsedlgjpyruhifxaewuqwo ; /usr/bin/python3'
Dec 13 07:14:47 compute-0 sudo[91038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:48 compute-0 sudo[91041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:14:48 compute-0 sudo[91041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:48 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 13 07:14:48 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 13 07:14:48 compute-0 python3[91042]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:48 compute-0 podman[91067]: 2025-12-13 07:14:48.125484388 +0000 UTC m=+0.028093502 container create 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:14:48 compute-0 systemd[1]: Started libpod-conmon-1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96.scope.
Dec 13 07:14:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7605ecef08d9aee831369bdda9ffcf1dfbad48530a90cd5fdce5b9c19f64bb91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7605ecef08d9aee831369bdda9ffcf1dfbad48530a90cd5fdce5b9c19f64bb91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7605ecef08d9aee831369bdda9ffcf1dfbad48530a90cd5fdce5b9c19f64bb91/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:48 compute-0 podman[91067]: 2025-12-13 07:14:48.172488082 +0000 UTC m=+0.075097216 container init 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:48 compute-0 podman[91067]: 2025-12-13 07:14:48.176942765 +0000 UTC m=+0.079551879 container start 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:48 compute-0 podman[91067]: 2025-12-13 07:14:48.178763116 +0000 UTC m=+0.081372230 container attach 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v56: 193 pgs: 131 active+clean, 62 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:48 compute-0 podman[91067]: 2025-12-13 07:14:48.114565847 +0000 UTC m=+0.017174981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:48 compute-0 podman[91094]: 2025-12-13 07:14:48.231039431 +0000 UTC m=+0.027090788 container create 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:14:48 compute-0 systemd[1]: Started libpod-conmon-436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133.scope.
Dec 13 07:14:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:48 compute-0 podman[91094]: 2025-12-13 07:14:48.280390697 +0000 UTC m=+0.076442075 container init 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:14:48 compute-0 podman[91094]: 2025-12-13 07:14:48.285643439 +0000 UTC m=+0.081694798 container start 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:48 compute-0 podman[91094]: 2025-12-13 07:14:48.286824248 +0000 UTC m=+0.082875607 container attach 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 07:14:48 compute-0 silly_faraday[91108]: 167 167
Dec 13 07:14:48 compute-0 systemd[1]: libpod-436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133.scope: Deactivated successfully.
Dec 13 07:14:48 compute-0 podman[91094]: 2025-12-13 07:14:48.289173083 +0000 UTC m=+0.085224441 container died 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-43b94e35ede8accac8edf75c9632a89dff68031c0c0f606fc33afb9b60cb71c5-merged.mount: Deactivated successfully.
Dec 13 07:14:48 compute-0 podman[91094]: 2025-12-13 07:14:48.307412896 +0000 UTC m=+0.103464253 container remove 436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:48 compute-0 podman[91094]: 2025-12-13 07:14:48.220620828 +0000 UTC m=+0.016672206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:48 compute-0 systemd[1]: libpod-conmon-436ae0f5ebebca3aef22d8a5b068d1b1fdb6c54fae6267e67b8f229961493133.scope: Deactivated successfully.
Dec 13 07:14:48 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 13 07:14:48 compute-0 podman[91147]: 2025-12-13 07:14:48.422155375 +0000 UTC m=+0.027966274 container create 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 07:14:48 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 13 07:14:48 compute-0 systemd[1]: Started libpod-conmon-92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292.scope.
Dec 13 07:14:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:48 compute-0 podman[91147]: 2025-12-13 07:14:48.475662172 +0000 UTC m=+0.081473081 container init 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:48 compute-0 podman[91147]: 2025-12-13 07:14:48.481193628 +0000 UTC m=+0.087004527 container start 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:48 compute-0 podman[91147]: 2025-12-13 07:14:48.482628065 +0000 UTC m=+0.088438964 container attach 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 07:14:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 13 07:14:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 13 07:14:48 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1e( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1d( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1c( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.13( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.12( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.17( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.10( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.16( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.15( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.14( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.b( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.a( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.9( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.8( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.f( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.6( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.4( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.5( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.7( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.2( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.3( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.11( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.c( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.d( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.e( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1f( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.18( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1a( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1b( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.19( empty local-lis/les=22/23 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1e( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.13( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.12( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1d( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.17( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.10( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.16( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.15( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.14( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.b( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.9( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.8( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.6( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.4( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.0( empty local-lis/les=30/31 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.5( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.7( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.3( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.11( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.d( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.e( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.18( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.2( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.1b( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 31 pg[7.19( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=22/22 les/c/f=23/23/0 sis=30) [1] r=0 lpr=30 pi=[22,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:48 compute-0 podman[91147]: 2025-12-13 07:14:48.409986293 +0000 UTC m=+0.015797212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:48 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:14:48 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec 13 07:14:48 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 13 07:14:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 07:14:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:48 compute-0 nifty_dubinsky[91079]: Scheduled rgw.rgw update...
Dec 13 07:14:48 compute-0 systemd[1]: libpod-1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96.scope: Deactivated successfully.
Dec 13 07:14:48 compute-0 conmon[91079]: conmon 1043b00bf01b041ecfb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96.scope/container/memory.events
Dec 13 07:14:48 compute-0 podman[91067]: 2025-12-13 07:14:48.545295629 +0000 UTC m=+0.447904743 container died 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:48 compute-0 podman[91067]: 2025-12-13 07:14:48.564091475 +0000 UTC m=+0.466700590 container remove 1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96 (image=quay.io/ceph/ceph:v20, name=nifty_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 07:14:48 compute-0 systemd[1]: libpod-conmon-1043b00bf01b041ecfb320e84e792c128d857b61dbd3a2377cb53152f7ca2b96.scope: Deactivated successfully.
Dec 13 07:14:48 compute-0 sudo[91038]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2200430480' entity='client.admin' 
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:48 compute-0 ceph-mon[74928]: 3.1f scrub starts
Dec 13 07:14:48 compute-0 ceph-mon[74928]: 3.1f scrub ok
Dec 13 07:14:48 compute-0 ceph-mon[74928]: pgmap v56: 193 pgs: 131 active+clean, 62 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:48 compute-0 ceph-mon[74928]: 4.1d scrub starts
Dec 13 07:14:48 compute-0 ceph-mon[74928]: 4.1d scrub ok
Dec 13 07:14:48 compute-0 ceph-mon[74928]: osdmap e31: 3 total, 3 up, 3 in
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:14:48 compute-0 ceph-mon[74928]: Saving service rgw.rgw spec with placement compute-0
Dec 13 07:14:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7605ecef08d9aee831369bdda9ffcf1dfbad48530a90cd5fdce5b9c19f64bb91-merged.mount: Deactivated successfully.
Dec 13 07:14:48 compute-0 kind_tharp[91160]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:14:48 compute-0 kind_tharp[91160]: --> All data devices are unavailable
Dec 13 07:14:48 compute-0 systemd[1]: libpod-92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292.scope: Deactivated successfully.
Dec 13 07:14:48 compute-0 podman[91147]: 2025-12-13 07:14:48.853506165 +0000 UTC m=+0.459317074 container died 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 07:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ca1625d6ee2c6ee98f4e3fdc88b4fc3b61b955aab46c6e6888c98065b972363-merged.mount: Deactivated successfully.
Dec 13 07:14:48 compute-0 podman[91147]: 2025-12-13 07:14:48.874019359 +0000 UTC m=+0.479830258 container remove 92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tharp, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 07:14:48 compute-0 systemd[1]: libpod-conmon-92fcfb75aed971f5b1d18a9713090730c23f1365c6e88f29c41ecfb43f996292.scope: Deactivated successfully.
Dec 13 07:14:48 compute-0 sudo[91041]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:48 compute-0 sudo[91202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:48 compute-0 sudo[91202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:48 compute-0 sudo[91202]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:48 compute-0 sudo[91227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:14:48 compute-0 sudo[91227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:49 compute-0 ceph-mgr[75200]: [progress INFO root] Writing back 9 completed events
Dec 13 07:14:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 07:14:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:49 compute-0 podman[91318]: 2025-12-13 07:14:49.195713678 +0000 UTC m=+0.026068337 container create 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 07:14:49 compute-0 systemd[1]: Started libpod-conmon-184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680.scope.
Dec 13 07:14:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:49 compute-0 podman[91318]: 2025-12-13 07:14:49.248012975 +0000 UTC m=+0.078367654 container init 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:14:49 compute-0 podman[91318]: 2025-12-13 07:14:49.253019475 +0000 UTC m=+0.083374134 container start 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:14:49 compute-0 podman[91318]: 2025-12-13 07:14:49.254176129 +0000 UTC m=+0.084530788 container attach 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:49 compute-0 blissful_galileo[91350]: 167 167
Dec 13 07:14:49 compute-0 systemd[1]: libpod-184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680.scope: Deactivated successfully.
Dec 13 07:14:49 compute-0 conmon[91350]: conmon 184f20331906ca178362 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680.scope/container/memory.events
Dec 13 07:14:49 compute-0 podman[91318]: 2025-12-13 07:14:49.256759935 +0000 UTC m=+0.087114594 container died 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 07:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ab6caba8a9af4fd5e6fd49115bc1bec5761e87e0662bc90cf7ead86ecda3428-merged.mount: Deactivated successfully.
Dec 13 07:14:49 compute-0 podman[91318]: 2025-12-13 07:14:49.281213385 +0000 UTC m=+0.111568045 container remove 184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:49 compute-0 podman[91318]: 2025-12-13 07:14:49.184976337 +0000 UTC m=+0.015331016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:49 compute-0 systemd[1]: libpod-conmon-184f20331906ca1783627b4319e7e8e4ec9fe5624dbde72ac98541589748f680.scope: Deactivated successfully.
Dec 13 07:14:49 compute-0 python3[91347]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:14:49 compute-0 podman[91395]: 2025-12-13 07:14:49.393938284 +0000 UTC m=+0.027188983 container create a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:49 compute-0 systemd[1]: Started libpod-conmon-a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2.scope.
Dec 13 07:14:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:49 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 13 07:14:49 compute-0 podman[91395]: 2025-12-13 07:14:49.443560269 +0000 UTC m=+0.076810978 container init a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:49 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 13 07:14:49 compute-0 podman[91395]: 2025-12-13 07:14:49.448687777 +0000 UTC m=+0.081938475 container start a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:14:49 compute-0 podman[91395]: 2025-12-13 07:14:49.449845282 +0000 UTC m=+0.083096001 container attach a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 07:14:49 compute-0 podman[91395]: 2025-12-13 07:14:49.383244325 +0000 UTC m=+0.016495044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:49 compute-0 python3[91458]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610089.0970848-37036-173679259064353/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:14:49 compute-0 modest_goldstine[91438]: {
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:     "0": [
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:         {
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "devices": [
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "/dev/loop3"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             ],
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_name": "ceph_lv0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_size": "21470642176",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "name": "ceph_lv0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "tags": {
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.crush_device_class": "",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.encrypted": "0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osd_id": "0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.type": "block",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.vdo": "0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.with_tpm": "0"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             },
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "type": "block",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "vg_name": "ceph_vg0"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:         }
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:     ],
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:     "1": [
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:         {
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "devices": [
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "/dev/loop4"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             ],
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_name": "ceph_lv1",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_size": "21470642176",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "name": "ceph_lv1",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "tags": {
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.crush_device_class": "",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.encrypted": "0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osd_id": "1",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.type": "block",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.vdo": "0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.with_tpm": "0"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             },
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "type": "block",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "vg_name": "ceph_vg1"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:         }
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:     ],
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:     "2": [
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:         {
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "devices": [
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "/dev/loop5"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             ],
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_name": "ceph_lv2",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_size": "21470642176",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "name": "ceph_lv2",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "tags": {
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.crush_device_class": "",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.encrypted": "0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osd_id": "2",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.type": "block",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.vdo": "0",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:                 "ceph.with_tpm": "0"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             },
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "type": "block",
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:             "vg_name": "ceph_vg2"
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:         }
Dec 13 07:14:49 compute-0 modest_goldstine[91438]:     ]
Dec 13 07:14:49 compute-0 modest_goldstine[91438]: }
Dec 13 07:14:49 compute-0 systemd[1]: libpod-a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2.scope: Deactivated successfully.
Dec 13 07:14:49 compute-0 conmon[91438]: conmon a7528cbca6354316f4e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2.scope/container/memory.events
Dec 13 07:14:49 compute-0 podman[91395]: 2025-12-13 07:14:49.684587152 +0000 UTC m=+0.317837851 container died a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:49 compute-0 podman[91395]: 2025-12-13 07:14:49.706445385 +0000 UTC m=+0.339696084 container remove a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 07:14:49 compute-0 systemd[1]: libpod-conmon-a7528cbca6354316f4e5ed3afc99e5cc24320406022c5b2fdc47cd992feb09a2.scope: Deactivated successfully.
Dec 13 07:14:49 compute-0 sudo[91227]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:49 compute-0 sudo[91499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:49 compute-0 sudo[91499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:49 compute-0 sudo[91499]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:49 compute-0 sudo[91524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:14:49 compute-0 sudo[91524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:49 compute-0 sudo[91570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etqtcyvwttuqggqoqtfdfknaxwasxcbb ; /usr/bin/python3'
Dec 13 07:14:49 compute-0 sudo[91570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-da0dbe1ffe5bff55074ff138bb3a603334c9a6ced2cc89f9310552e1d8f91ccf-merged.mount: Deactivated successfully.
Dec 13 07:14:49 compute-0 python3[91574]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:49 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 13 07:14:49 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 13 07:14:49 compute-0 podman[91575]: 2025-12-13 07:14:49.975600776 +0000 UTC m=+0.029502571 container create 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:14:50 compute-0 systemd[1]: Started libpod-conmon-7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564.scope.
Dec 13 07:14:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d748c88a04e6ca1e5b943c9204d535237a8b0613bd1364e789b052b7b828d0d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d748c88a04e6ca1e5b943c9204d535237a8b0613bd1364e789b052b7b828d0d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d748c88a04e6ca1e5b943c9204d535237a8b0613bd1364e789b052b7b828d0d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 podman[91575]: 2025-12-13 07:14:50.035534883 +0000 UTC m=+0.089436698 container init 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:14:50 compute-0 podman[91575]: 2025-12-13 07:14:50.040838962 +0000 UTC m=+0.094740757 container start 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:50 compute-0 podman[91575]: 2025-12-13 07:14:50.042168701 +0000 UTC m=+0.096070496 container attach 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:50 compute-0 podman[91599]: 2025-12-13 07:14:50.043944028 +0000 UTC m=+0.030224788 container create 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:14:50 compute-0 podman[91575]: 2025-12-13 07:14:49.964137841 +0000 UTC m=+0.018039646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:50 compute-0 systemd[1]: Started libpod-conmon-12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1.scope.
Dec 13 07:14:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:50 compute-0 ceph-mon[74928]: 4.1e scrub starts
Dec 13 07:14:50 compute-0 ceph-mon[74928]: 4.1e scrub ok
Dec 13 07:14:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:50 compute-0 podman[91599]: 2025-12-13 07:14:50.095214842 +0000 UTC m=+0.081495592 container init 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:14:50 compute-0 podman[91599]: 2025-12-13 07:14:50.099913473 +0000 UTC m=+0.086194224 container start 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 07:14:50 compute-0 podman[91599]: 2025-12-13 07:14:50.101035272 +0000 UTC m=+0.087316021 container attach 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:50 compute-0 elastic_solomon[91615]: 167 167
Dec 13 07:14:50 compute-0 systemd[1]: libpod-12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1.scope: Deactivated successfully.
Dec 13 07:14:50 compute-0 conmon[91615]: conmon 12e5017ca8327a586caa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1.scope/container/memory.events
Dec 13 07:14:50 compute-0 podman[91599]: 2025-12-13 07:14:50.102861655 +0000 UTC m=+0.089142405 container died 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c3382e429be7f9b0ce32109956b2b492d7a78905d28b6aa4f76d201dbc3a013-merged.mount: Deactivated successfully.
Dec 13 07:14:50 compute-0 podman[91599]: 2025-12-13 07:14:50.125665434 +0000 UTC m=+0.111946185 container remove 12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_solomon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:50 compute-0 podman[91599]: 2025-12-13 07:14:50.033888458 +0000 UTC m=+0.020169229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:50 compute-0 systemd[1]: libpod-conmon-12e5017ca8327a586caa02e46db69069477ba24dba9de2e95b9d84d727c787e1.scope: Deactivated successfully.
Dec 13 07:14:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v58: 193 pgs: 162 active+clean, 31 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:50 compute-0 podman[91655]: 2025-12-13 07:14:50.24309245 +0000 UTC m=+0.029021506 container create b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:50 compute-0 systemd[1]: Started libpod-conmon-b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a.scope.
Dec 13 07:14:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 podman[91655]: 2025-12-13 07:14:50.303037036 +0000 UTC m=+0.088966112 container init b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:50 compute-0 podman[91655]: 2025-12-13 07:14:50.307542785 +0000 UTC m=+0.093471841 container start b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:50 compute-0 podman[91655]: 2025-12-13 07:14:50.308824515 +0000 UTC m=+0.094753591 container attach b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 07:14:50 compute-0 podman[91655]: 2025-12-13 07:14:50.230937314 +0000 UTC m=+0.016866390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:50 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:14:50 compute-0 ceph-mgr[75200]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 13 07:14:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 13 07:14:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 13 07:14:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 13 07:14:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 13 07:14:50 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0[74924]: 2025-12-13T07:14:50.409+0000 7fa7b36e9640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 13 07:14:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e2 new map
Dec 13 07:14:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-13T07:14:50.410071+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-13T07:14:50.409831+0000
                                           modified        2025-12-13T07:14:50.409831+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 13 07:14:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 13 07:14:50 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 13 07:14:50 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 13 07:14:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 07:14:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:50 compute-0 ceph-mgr[75200]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 13 07:14:50 compute-0 systemd[1]: libpod-7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564.scope: Deactivated successfully.
Dec 13 07:14:50 compute-0 conmon[91597]: conmon 7ea71ea4cd112f3bfdff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564.scope/container/memory.events
Dec 13 07:14:50 compute-0 podman[91575]: 2025-12-13 07:14:50.440945327 +0000 UTC m=+0.494847132 container died 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:50 compute-0 podman[91575]: 2025-12-13 07:14:50.459563791 +0000 UTC m=+0.513465586 container remove 7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564 (image=quay.io/ceph/ceph:v20, name=stoic_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:50 compute-0 systemd[1]: libpod-conmon-7ea71ea4cd112f3bfdff2517b04d88dba8d9b5daa0e449b771816d69277bb564.scope: Deactivated successfully.
Dec 13 07:14:50 compute-0 sudo[91570]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:50 compute-0 sudo[91724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjmvhdqpgriinvkfbixayrawiwxtiglj ; /usr/bin/python3'
Dec 13 07:14:50 compute-0 sudo[91724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:50 compute-0 python3[91731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:50 compute-0 podman[91765]: 2025-12-13 07:14:50.763348138 +0000 UTC m=+0.036958003 container create 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 07:14:50 compute-0 systemd[1]: Started libpod-conmon-39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1.scope.
Dec 13 07:14:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc2677f5f39c827fcce2f85f380458faabd7218088062441ff81f1bb50525e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc2677f5f39c827fcce2f85f380458faabd7218088062441ff81f1bb50525e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc2677f5f39c827fcce2f85f380458faabd7218088062441ff81f1bb50525e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:50 compute-0 podman[91765]: 2025-12-13 07:14:50.830660762 +0000 UTC m=+0.104270628 container init 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 07:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d748c88a04e6ca1e5b943c9204d535237a8b0613bd1364e789b052b7b828d0d0-merged.mount: Deactivated successfully.
Dec 13 07:14:50 compute-0 podman[91765]: 2025-12-13 07:14:50.838803758 +0000 UTC m=+0.112413623 container start 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:50 compute-0 podman[91765]: 2025-12-13 07:14:50.840359221 +0000 UTC m=+0.113969106 container attach 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:14:50 compute-0 podman[91765]: 2025-12-13 07:14:50.750666972 +0000 UTC m=+0.024276859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:50 compute-0 lvm[91796]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:50 compute-0 lvm[91796]: VG ceph_vg0 finished
Dec 13 07:14:50 compute-0 lvm[91799]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:14:50 compute-0 lvm[91799]: VG ceph_vg1 finished
Dec 13 07:14:50 compute-0 lvm[91802]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:50 compute-0 lvm[91802]: VG ceph_vg2 finished
Dec 13 07:14:50 compute-0 pensive_raman[91668]: {}
Dec 13 07:14:50 compute-0 lvm[91805]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:50 compute-0 lvm[91805]: VG ceph_vg0 finished
Dec 13 07:14:50 compute-0 systemd[1]: libpod-b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a.scope: Deactivated successfully.
Dec 13 07:14:50 compute-0 podman[91825]: 2025-12-13 07:14:50.966722692 +0000 UTC m=+0.017414682 container died b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-69c5d4e7a4357fc103490dca075f8df7d7202fb26e9daf84f876e25b78a91b6a-merged.mount: Deactivated successfully.
Dec 13 07:14:50 compute-0 podman[91825]: 2025-12-13 07:14:50.991277353 +0000 UTC m=+0.041969322 container remove b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:14:50 compute-0 systemd[1]: libpod-conmon-b69e737569610004c428347db011ac4714335146a9756b8a895d926757cdb87a.scope: Deactivated successfully.
Dec 13 07:14:51 compute-0 sudo[91524]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:51 compute-0 sudo[91836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:14:51 compute-0 sudo[91836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:51 compute-0 sudo[91836]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:51 compute-0 ceph-mon[74928]: 3.1d scrub starts
Dec 13 07:14:51 compute-0 ceph-mon[74928]: 3.1d scrub ok
Dec 13 07:14:51 compute-0 ceph-mon[74928]: pgmap v58: 193 pgs: 162 active+clean, 31 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:51 compute-0 ceph-mon[74928]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:14:51 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 13 07:14:51 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 13 07:14:51 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 13 07:14:51 compute-0 ceph-mon[74928]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 07:14:51 compute-0 ceph-mon[74928]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 13 07:14:51 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 13 07:14:51 compute-0 ceph-mon[74928]: osdmap e32: 3 total, 3 up, 3 in
Dec 13 07:14:51 compute-0 ceph-mon[74928]: fsmap cephfs:0
Dec 13 07:14:51 compute-0 ceph-mon[74928]: Saving service mds.cephfs spec with placement compute-0
Dec 13 07:14:51 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:51 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:51 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:51 compute-0 sudo[91861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:51 compute-0 sudo[91861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:51 compute-0 sudo[91861]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:51 compute-0 sudo[91886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:14:51 compute-0 sudo[91886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:51 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:14:51 compute-0 ceph-mgr[75200]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 13 07:14:51 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 13 07:14:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 07:14:51 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:51 compute-0 nostalgic_diffie[91787]: Scheduled mds.cephfs update...
Dec 13 07:14:51 compute-0 systemd[1]: libpod-39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1.scope: Deactivated successfully.
Dec 13 07:14:51 compute-0 conmon[91787]: conmon 39827d121e52975f63fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1.scope/container/memory.events
Dec 13 07:14:51 compute-0 podman[91765]: 2025-12-13 07:14:51.204138609 +0000 UTC m=+0.477748475 container died 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fc2677f5f39c827fcce2f85f380458faabd7218088062441ff81f1bb50525e4-merged.mount: Deactivated successfully.
Dec 13 07:14:51 compute-0 podman[91765]: 2025-12-13 07:14:51.225411181 +0000 UTC m=+0.499021048 container remove 39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:14:51 compute-0 systemd[1]: libpod-conmon-39827d121e52975f63fbcbb9c99fee8bcaead78cc6af8f45014b8b77e88865a1.scope: Deactivated successfully.
Dec 13 07:14:51 compute-0 sudo[91724]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:51 compute-0 podman[91957]: 2025-12-13 07:14:51.490608964 +0000 UTC m=+0.038604690 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:51 compute-0 sudo[92056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnunygredjcjgnmqfbvasohkexpoarqy ; /usr/bin/python3'
Dec 13 07:14:51 compute-0 sudo[92056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:51 compute-0 podman[92025]: 2025-12-13 07:14:51.625632151 +0000 UTC m=+0.048108230 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:14:51 compute-0 podman[91957]: 2025-12-13 07:14:51.629411115 +0000 UTC m=+0.177406840 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:51 compute-0 python3[92059]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 07:14:51 compute-0 sudo[92056]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:51 compute-0 sudo[92199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyqagglrdjvqvazhzyekewdjgyikksht ; /usr/bin/python3'
Dec 13 07:14:51 compute-0 sudo[92199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:52 compute-0 python3[92203]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610091.5224814-37066-154860606618398/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=89bb88aee4825eacb5f29faabebd795dc909bcd4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:14:52 compute-0 sudo[91886]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:52 compute-0 sudo[92199]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:52 compute-0 sudo[92232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:52 compute-0 sudo[92232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:52 compute-0 sudo[92232]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:52 compute-0 sudo[92281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:14:52 compute-0 sudo[92281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: Saving service mds.cephfs spec with placement compute-0
Dec 13 07:14:52 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:52 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:52 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v60: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:52 compute-0 sudo[92329]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thjwwidliwxqfsnvznoszoptmcquvtpo ; /usr/bin/python3'
Dec 13 07:14:52 compute-0 sudo[92329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:52 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 13 07:14:52 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 13 07:14:52 compute-0 python3[92333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:52 compute-0 podman[92347]: 2025-12-13 07:14:52.431501494 +0000 UTC m=+0.029999895 container create 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 07:14:52 compute-0 systemd[1]: Started libpod-conmon-432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097.scope.
Dec 13 07:14:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6251b333b72a3e7080263b4581892cb66666b9bd4d648f45326703b2e0fa703/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6251b333b72a3e7080263b4581892cb66666b9bd4d648f45326703b2e0fa703/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:52 compute-0 podman[92347]: 2025-12-13 07:14:52.484499314 +0000 UTC m=+0.082997736 container init 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:14:52 compute-0 podman[92347]: 2025-12-13 07:14:52.495491494 +0000 UTC m=+0.093989906 container start 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:52 compute-0 podman[92347]: 2025-12-13 07:14:52.50088447 +0000 UTC m=+0.099382873 container attach 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 07:14:52 compute-0 sudo[92281]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:52 compute-0 podman[92347]: 2025-12-13 07:14:52.419810721 +0000 UTC m=+0.018309143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:52 compute-0 sudo[92378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:52 compute-0 sudo[92378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:52 compute-0 sudo[92378]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:52 compute-0 sudo[92420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:14:52 compute-0 sudo[92420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:52 compute-0 podman[92457]: 2025-12-13 07:14:52.848247924 +0000 UTC m=+0.027689043 container create 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:14:52 compute-0 systemd[1]: Started libpod-conmon-8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0.scope.
Dec 13 07:14:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4247453847' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 13 07:14:52 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4247453847' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 13 07:14:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:52 compute-0 podman[92457]: 2025-12-13 07:14:52.896562853 +0000 UTC m=+0.076003982 container init 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:52 compute-0 podman[92457]: 2025-12-13 07:14:52.900611723 +0000 UTC m=+0.080052852 container start 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:14:52 compute-0 podman[92457]: 2025-12-13 07:14:52.902487808 +0000 UTC m=+0.081928947 container attach 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:14:52 compute-0 systemd[1]: libpod-432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097.scope: Deactivated successfully.
Dec 13 07:14:52 compute-0 fervent_shamir[92470]: 167 167
Dec 13 07:14:52 compute-0 systemd[1]: libpod-8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0.scope: Deactivated successfully.
Dec 13 07:14:52 compute-0 conmon[92362]: conmon 432383f4ffddb21de4db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097.scope/container/memory.events
Dec 13 07:14:52 compute-0 podman[92347]: 2025-12-13 07:14:52.905685529 +0000 UTC m=+0.504183930 container died 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:52 compute-0 conmon[92470]: conmon 8e43e84ceaedfe5e05e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0.scope/container/memory.events
Dec 13 07:14:52 compute-0 podman[92457]: 2025-12-13 07:14:52.907672844 +0000 UTC m=+0.087113973 container died 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-97bf723da73f1ffe2703cf97315ade64e934dd5a5564e3081d6e31bc50d72150-merged.mount: Deactivated successfully.
Dec 13 07:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6251b333b72a3e7080263b4581892cb66666b9bd4d648f45326703b2e0fa703-merged.mount: Deactivated successfully.
Dec 13 07:14:52 compute-0 podman[92457]: 2025-12-13 07:14:52.933475531 +0000 UTC m=+0.112916660 container remove 8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 07:14:52 compute-0 podman[92457]: 2025-12-13 07:14:52.836275582 +0000 UTC m=+0.015716731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:52 compute-0 podman[92347]: 2025-12-13 07:14:52.938603458 +0000 UTC m=+0.537101860 container remove 432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097 (image=quay.io/ceph/ceph:v20, name=cranky_wescoff, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:52 compute-0 systemd[1]: libpod-conmon-432383f4ffddb21de4db6d50bb1d33c5215904a4f7aaa232d6c385f1ce96d097.scope: Deactivated successfully.
Dec 13 07:14:52 compute-0 systemd[1]: libpod-conmon-8e43e84ceaedfe5e05e5123c5dca19efdfadb1c0b09a65e0d94f5f233d6251d0.scope: Deactivated successfully.
Dec 13 07:14:52 compute-0 sudo[92329]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:53 compute-0 podman[92502]: 2025-12-13 07:14:53.052908607 +0000 UTC m=+0.028296425 container create e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:53 compute-0 systemd[1]: Started libpod-conmon-e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096.scope.
Dec 13 07:14:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:53 compute-0 podman[92502]: 2025-12-13 07:14:53.120679633 +0000 UTC m=+0.096067441 container init e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:14:53 compute-0 podman[92502]: 2025-12-13 07:14:53.126050167 +0000 UTC m=+0.101437985 container start e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:53 compute-0 podman[92502]: 2025-12-13 07:14:53.127313351 +0000 UTC m=+0.102701159 container attach e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 07:14:53 compute-0 podman[92502]: 2025-12-13 07:14:53.041045169 +0000 UTC m=+0.016432997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 13 07:14:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271790504s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968959808s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288320541s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985500336s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271747589s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968959808s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288269043s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985500336s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287899971s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985153198s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287873268s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985153198s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271297455s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968765259s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271382332s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968856812s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271284103s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968765259s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271527290s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.969051361s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271353722s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968856812s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271512985s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.969051361s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271117210s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968692780s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288145065s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985740662s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288131714s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985740662s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271201134s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968852997s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271050453s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968692780s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271189690s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968852997s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288107872s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985786438s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288093567s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985786438s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288252831s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985961914s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.1e( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.19( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.18( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.1a( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.19( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.7( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.5( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.2( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288240433s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985961914s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270836830s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968608856s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287993431s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985763550s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.1f( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.2( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.3( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.8( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.16( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.1d( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.c( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.f( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.9( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.6( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.1( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.7( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.4( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.5( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.15( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.3( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.9( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.16( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.12( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.13( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[5.14( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.17( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.13( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[2.11( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266854286s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958225250s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[5.11( empty local-lis/les=0/0 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266841888s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958225250s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284462929s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976142883s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284450531s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976142883s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266247749s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958084106s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266237259s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958084106s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284382820s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976264954s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284361839s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976264954s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266141891s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958072662s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266132355s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958072662s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266028404s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958084106s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284206390s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976264954s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266015053s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958084106s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284194946s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976264954s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265874863s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958015442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265864372s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958015442s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.266007423s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958179474s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265996933s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958179474s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284914017s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977138519s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284904480s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977138519s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265877724s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958160400s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284804344s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977115631s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265867233s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958160400s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284794807s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977115631s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265625954s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958011627s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265614510s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958011627s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284768105s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977180481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284757614s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977180481s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284683228s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977157593s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284672737s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977157593s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284670830s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977180481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284660339s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977180481s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265351295s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957958221s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265339851s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957958221s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265258789s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957885742s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265247345s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957885742s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284502029s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977214813s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284492493s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977214813s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265030861s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957862854s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.265020370s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957862854s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283348083s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976219177s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284305573s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977222443s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283327103s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976219177s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284295082s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977222443s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264868736s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957881927s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284204483s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977230072s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264857292s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957881927s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284193993s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977230072s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264738083s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957839966s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270820618s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968608856s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=0/0 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287983894s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985794067s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287969589s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985763550s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287973404s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985794067s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270705223s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968540192s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270693779s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968540192s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288050652s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985950470s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288040161s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985950470s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271038055s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.969043732s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288059235s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986072540s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.271027565s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.969043732s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.288045883s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986072540s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270983696s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.969047546s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270970345s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.969047546s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287842751s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.985996246s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270167351s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968326569s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287830353s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.985996246s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.270154953s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968326569s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287767410s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986015320s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269947052s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968212128s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287756920s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986015320s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269937515s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968212128s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287714005s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986038208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287701607s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986038208s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269869804s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968242645s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287649155s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986038208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287636757s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986038208s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269750595s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968173981s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269803047s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968242645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287533760s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986038208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287524223s) [0] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986038208s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269651413s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968173981s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269672394s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968231201s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269659996s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968231201s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287481308s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986087799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287470818s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986087799s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269401550s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968040466s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269389153s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968040466s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287393570s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986053467s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287382126s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986053467s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269335747s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968025208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269327164s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968025208s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269232750s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.967975616s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269223213s) [1] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.967975616s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.268044472s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.966827393s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287295341s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986083984s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.268028259s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.966827393s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287283897s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986083984s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.268096924s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.966949463s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.268087387s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.966949463s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.267945290s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.966838837s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287194252s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986091614s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287183762s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986091614s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.267930984s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.966838837s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287652969s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986606598s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287644386s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986606598s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.267802238s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.966815948s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.267787933s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.966815948s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287344933s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 42.986442566s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=28/29 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.287312508s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 42.986442566s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269187927s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 active pruub 47.968326569s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=24/26 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33 pruub=13.269170761s) [0] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 unknown NOTIFY pruub 47.968326569s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264728546s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957839966s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264863968s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957996368s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264854431s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957996368s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263738632s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.956951141s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284156799s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977390289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263729095s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.956951141s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.284144402s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977390289s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264417648s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.957714081s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264406204s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.957714081s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283899307s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977249146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283886909s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977249146s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263356209s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.956760406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263347626s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.956760406s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263430595s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.956874847s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263421059s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.956874847s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283769608s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977279663s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283742905s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977279663s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283679008s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977294922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264464378s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.958095551s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.12( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.10( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.f( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.d( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.c( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.d( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.e( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.2( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.2( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.1( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.4( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.17( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.6( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.b( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283666611s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977294922s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.264445305s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.958095551s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283563614s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977302551s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283623695s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.977371216s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.5( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283553123s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977302551s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.4( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.283612251s) [1] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.977371216s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263423920s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 54.956939697s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33 pruub=14.263049126s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 54.956939697s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.7( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.1e( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[4.14( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.1c( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[6.1d( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.292141914s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842533112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.292124748s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842533112s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257698059s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808376312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257686615s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808376312s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291768074s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842552185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291752815s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842552185s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.18( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.15( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.13( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.278313637s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 active pruub 48.976139069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=28/29 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33 pruub=8.278292656s) [2] r=-1 lpr=33 pi=[28,33)/1 crt=0'0 unknown NOTIFY pruub 48.976139069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257464409s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808376312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257449150s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808376312s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257287025s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808353424s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257276535s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808353424s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.11( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291619301s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842761993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.11( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291609764s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842761993s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257050514s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808311462s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257040977s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808311462s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257040977s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808380127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.257032394s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808380127s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.15( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291224480s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842632294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.15( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.291180611s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842632294s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256767273s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808307648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256759644s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808307648s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256684303s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808303833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256677628s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808303833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290964127s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842658997s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290955544s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842658997s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290888786s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842670441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290882111s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842670441s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256412506s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808273315s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.256403923s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808273315s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290736198s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842674255s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290728569s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842674255s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290653229s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842685699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.11( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.13( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.11( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.e( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.f( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.1( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.1a( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.a( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.8( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.1b( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.1f( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[4.1c( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[6.14( empty local-lis/les=0/0 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290645599s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842685699s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290569305s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842693329s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290561676s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842693329s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290488243s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842708588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290479660s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842708588s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255922318s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808227539s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255914688s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808227539s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.5( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290238380s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842723846s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.5( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.290227890s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842723846s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255731583s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808303833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255723000s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808303833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255573273s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808235168s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255566597s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808235168s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289997101s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842739105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289990425s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842739105s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255416870s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808231354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.255409241s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808231354s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289930344s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842807770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289922714s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842807770s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.253518105s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806476593s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.253510475s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806476593s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289686203s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842742920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289677620s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842742920s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252987862s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806118011s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252979279s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806118011s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289563179s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842761993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.c( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289554596s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842761993s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252856255s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806114197s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252847672s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806114197s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.253200531s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806533813s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.253192902s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806533813s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289352417s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842777252s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289342880s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842777252s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289284706s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842792511s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1f( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289276123s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842792511s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252532959s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806114197s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252525330s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806114197s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289145470s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842792511s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252443314s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806095123s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289132118s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842792511s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252432823s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806095123s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289058685s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842807770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252333641s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806098938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252324104s) [2] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806098938s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1a( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.289048195s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842807770s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.288977623s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 active pruub 48.842823029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=30/31 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=11.288969040s) [0] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 unknown NOTIFY pruub 48.842823029s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252244949s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.806110382s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252229691s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.806110382s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252530098s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 active pruub 52.808372498s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:14:53 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=26/28 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=15.252507210s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 unknown NOTIFY pruub 52.808372498s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:14:53 compute-0 ceph-mon[74928]: pgmap v60: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: 2.1d scrub starts
Dec 13 07:14:53 compute-0 ceph-mon[74928]: 2.1d scrub ok
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4247453847' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 13 07:14:53 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4247453847' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.13( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.12( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.f( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.9( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.c( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.f( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.6( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.4( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.1( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.3( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.6( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.3( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.a( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.1f( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.18( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[7.1b( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.1f( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 33 pg[3.17( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.1c( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.18( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.16( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.11( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.11( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.15( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.e( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.a( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.8( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.5( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.5( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.1( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.2( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.7( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.8( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.c( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.e( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.1d( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[3.1e( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 33 pg[7.1a( empty local-lis/les=0/0 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:53 compute-0 sudo[92551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijgrgudsvlhfvsfvwucegqdqwrneibsr ; /usr/bin/python3'
Dec 13 07:14:53 compute-0 sudo[92551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:53 compute-0 nice_ptolemy[92516]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:14:53 compute-0 nice_ptolemy[92516]: --> All data devices are unavailable
Dec 13 07:14:53 compute-0 python3[92553]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:53 compute-0 systemd[1]: libpod-e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096.scope: Deactivated successfully.
Dec 13 07:14:53 compute-0 podman[92502]: 2025-12-13 07:14:53.51609433 +0000 UTC m=+0.491482158 container died e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 07:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4214272df158ed85bd6c6b696971883a186c28eab5d769d9956596244d114aaa-merged.mount: Deactivated successfully.
Dec 13 07:14:53 compute-0 podman[92502]: 2025-12-13 07:14:53.53981882 +0000 UTC m=+0.515206628 container remove e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 07:14:53 compute-0 systemd[1]: libpod-conmon-e03f1c73fbef2eaa47cb26e19a1c44de0ba91db77530d371697a286558d61096.scope: Deactivated successfully.
Dec 13 07:14:53 compute-0 podman[92563]: 2025-12-13 07:14:53.553516525 +0000 UTC m=+0.040353666 container create 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:14:53 compute-0 sudo[92420]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:53 compute-0 systemd[1]: Started libpod-conmon-4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c.scope.
Dec 13 07:14:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d753cdbddc33c10ec3a66780ab30219b024850be73f4c35c3d34b958efcd13/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d753cdbddc33c10ec3a66780ab30219b024850be73f4c35c3d34b958efcd13/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:53 compute-0 podman[92563]: 2025-12-13 07:14:53.608193242 +0000 UTC m=+0.095030393 container init 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 07:14:53 compute-0 podman[92563]: 2025-12-13 07:14:53.612854863 +0000 UTC m=+0.099692003 container start 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:53 compute-0 podman[92563]: 2025-12-13 07:14:53.614254163 +0000 UTC m=+0.101091304 container attach 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 07:14:53 compute-0 sudo[92588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:53 compute-0 sudo[92588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:53 compute-0 sudo[92588]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:53 compute-0 podman[92563]: 2025-12-13 07:14:53.538040237 +0000 UTC m=+0.024877399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:53 compute-0 sudo[92615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:14:53 compute-0 sudo[92615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:53 compute-0 podman[92670]: 2025-12-13 07:14:53.899013451 +0000 UTC m=+0.028231322 container create bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:53 compute-0 systemd[1]: Started libpod-conmon-bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f.scope.
Dec 13 07:14:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:53 compute-0 podman[92670]: 2025-12-13 07:14:53.943825747 +0000 UTC m=+0.073043647 container init bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:14:53 compute-0 podman[92670]: 2025-12-13 07:14:53.947888533 +0000 UTC m=+0.077106413 container start bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:53 compute-0 podman[92670]: 2025-12-13 07:14:53.95002527 +0000 UTC m=+0.079243170 container attach bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:53 compute-0 affectionate_gates[92683]: 167 167
Dec 13 07:14:53 compute-0 systemd[1]: libpod-bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f.scope: Deactivated successfully.
Dec 13 07:14:53 compute-0 conmon[92683]: conmon bb2cba6456d4d48641d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f.scope/container/memory.events
Dec 13 07:14:53 compute-0 podman[92670]: 2025-12-13 07:14:53.952842114 +0000 UTC m=+0.082059993 container died bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-04e6a91a0a6ebf93ede983b569a00a25183a219edfe71426e052bc24c6225056-merged.mount: Deactivated successfully.
Dec 13 07:14:53 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 13 07:14:53 compute-0 podman[92670]: 2025-12-13 07:14:53.972974443 +0000 UTC m=+0.102192323 container remove bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:53 compute-0 podman[92670]: 2025-12-13 07:14:53.887228221 +0000 UTC m=+0.016446122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:53 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 13 07:14:53 compute-0 systemd[1]: libpod-conmon-bb2cba6456d4d48641d808596a433562222b8bf4a944c93d71ed69370f85546f.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 07:14:54 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/595908538' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 07:14:54 compute-0 eager_agnesi[92586]: 
Dec 13 07:14:54 compute-0 eager_agnesi[92586]: {"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":91,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1765610062,"num_in_osds":3,"osd_in_since":1765610047,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83853312,"bytes_avail":64328073216,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-13T07:14:50:410071+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T07:14:40.194918+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"cf19f399-bc4c-48de-b8a2-1b3512091033":{"message":"Global Recovery Event (5s)\n      [=======================.....] ","progress":0.8393782377243042,"add_to_ceph_s":true}}}
Dec 13 07:14:54 compute-0 systemd[1]: libpod-4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 podman[92563]: 2025-12-13 07:14:54.02029373 +0000 UTC m=+0.507130870 container died 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 07:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8d753cdbddc33c10ec3a66780ab30219b024850be73f4c35c3d34b958efcd13-merged.mount: Deactivated successfully.
Dec 13 07:14:54 compute-0 podman[92563]: 2025-12-13 07:14:54.043800932 +0000 UTC m=+0.530638073 container remove 4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c (image=quay.io/ceph/ceph:v20, name=eager_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:54 compute-0 systemd[1]: libpod-conmon-4b9fac4f838135e7e15a07621a7c2287983fe10e6027eef6b3dc6fe47fdd690c.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 sudo[92551]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:54 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event cf19f399-bc4c-48de-b8a2-1b3512091033 (Global Recovery Event) in 10 seconds
Dec 13 07:14:54 compute-0 podman[92716]: 2025-12-13 07:14:54.097920992 +0000 UTC m=+0.029288780 container create 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:14:54 compute-0 systemd[1]: Started libpod-conmon-95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5.scope.
Dec 13 07:14:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410aa50d47a33a22361fcbcaba30615cc1efa0350e897edf638710efa68935cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410aa50d47a33a22361fcbcaba30615cc1efa0350e897edf638710efa68935cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410aa50d47a33a22361fcbcaba30615cc1efa0350e897edf638710efa68935cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410aa50d47a33a22361fcbcaba30615cc1efa0350e897edf638710efa68935cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:54 compute-0 podman[92716]: 2025-12-13 07:14:54.15021075 +0000 UTC m=+0.081578559 container init 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 07:14:54 compute-0 podman[92716]: 2025-12-13 07:14:54.156905793 +0000 UTC m=+0.088273593 container start 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:54 compute-0 podman[92716]: 2025-12-13 07:14:54.159665701 +0000 UTC m=+0.091033519 container attach 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 07:14:54 compute-0 sudo[92757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphdubgiymzmhhatnpuxldqmgqwiegrs ; /usr/bin/python3'
Dec 13 07:14:54 compute-0 sudo[92757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:54 compute-0 podman[92716]: 2025-12-13 07:14:54.08575747 +0000 UTC m=+0.017125269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 13 07:14:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 13 07:14:54 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.1e( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.15( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.14( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.1f( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.12( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.17( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.13( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.1b( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.a( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.f( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.3( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.3( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.6( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.2( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.5( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.6( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.4( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.9( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.3( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[5.7( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [0] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.1( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.c( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.4( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.f( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[3.1b( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.1f( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [0] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 34 pg[7.18( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [0] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v63: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.1d( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.d( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.f( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.d( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.2( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.4( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.6( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.4( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.1( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.7( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.5( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.9( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.b( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.9( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.8( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.16( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.17( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.12( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.12( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.13( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.11( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.c( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.1d( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.1c( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.e( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[4.10( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.18( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.1c( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.1c( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.16( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.1f( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.11( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.11( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.13( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.11( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.13( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.15( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.15( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.14( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.a( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.e( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.8( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.11( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.5( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.2( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.1( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.1( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.a( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.8( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.e( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.7( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.8( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.e( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[6.f( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [2] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.c( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.1d( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.5( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=24/17 lis/c=24/24 les/c/f=26/26/0 sis=33) [1] r=0 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.f( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.c( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.1( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.1a( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.1a( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.1b( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[4.18( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.18( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[5.19( empty local-lis/les=33/34 n=0 ec=28/20 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 34 pg[6.1e( empty local-lis/les=33/34 n=0 ec=28/21 lis/c=28/28 les/c/f=29/29/0 sis=33) [1] r=0 lpr=33 pi=[28,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[3.1e( empty local-lis/les=33/34 n=0 ec=26/18 lis/c=26/26 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 34 pg[7.1a( empty local-lis/les=33/34 n=0 ec=30/22 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:14:54 compute-0 ceph-mon[74928]: osdmap e33: 3 total, 3 up, 3 in
Dec 13 07:14:54 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/595908538' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 07:14:54 compute-0 ceph-mon[74928]: osdmap e34: 3 total, 3 up, 3 in
Dec 13 07:14:54 compute-0 python3[92759]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:54 compute-0 podman[92760]: 2025-12-13 07:14:54.341869836 +0000 UTC m=+0.029730580 container create 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 07:14:54 compute-0 systemd[1]: Started libpod-conmon-0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6.scope.
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]: {
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:     "0": [
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:         {
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "devices": [
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "/dev/loop3"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             ],
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_name": "ceph_lv0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_size": "21470642176",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "name": "ceph_lv0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "tags": {
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.crush_device_class": "",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.encrypted": "0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osd_id": "0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.type": "block",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.vdo": "0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.with_tpm": "0"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             },
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "type": "block",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "vg_name": "ceph_vg0"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:         }
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:     ],
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:     "1": [
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:         {
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "devices": [
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "/dev/loop4"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             ],
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_name": "ceph_lv1",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_size": "21470642176",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "name": "ceph_lv1",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "tags": {
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.crush_device_class": "",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.encrypted": "0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osd_id": "1",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.type": "block",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.vdo": "0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.with_tpm": "0"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             },
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "type": "block",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "vg_name": "ceph_vg1"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:         }
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:     ],
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:     "2": [
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:         {
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "devices": [
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "/dev/loop5"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             ],
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_name": "ceph_lv2",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_size": "21470642176",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "name": "ceph_lv2",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "tags": {
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.cluster_name": "ceph",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.crush_device_class": "",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.encrypted": "0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.objectstore": "bluestore",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osd_id": "2",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.type": "block",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.vdo": "0",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:                 "ceph.with_tpm": "0"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             },
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "type": "block",
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:             "vg_name": "ceph_vg2"
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:         }
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]:     ]
Dec 13 07:14:54 compute-0 unruffled_elbakyan[92729]: }
Dec 13 07:14:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7ce76781858ffbb41eae0b615e3d8a947c34030a3da43a017e46573808d84c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7ce76781858ffbb41eae0b615e3d8a947c34030a3da43a017e46573808d84c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:54 compute-0 podman[92760]: 2025-12-13 07:14:54.396136471 +0000 UTC m=+0.083997235 container init 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:54 compute-0 systemd[1]: libpod-95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 conmon[92729]: conmon 95bf82c5f8494264d329 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5.scope/container/memory.events
Dec 13 07:14:54 compute-0 podman[92760]: 2025-12-13 07:14:54.402336444 +0000 UTC m=+0.090197188 container start 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:54 compute-0 podman[92760]: 2025-12-13 07:14:54.403630446 +0000 UTC m=+0.091491180 container attach 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:54 compute-0 podman[92716]: 2025-12-13 07:14:54.404180179 +0000 UTC m=+0.335547979 container died 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 07:14:54 compute-0 podman[92760]: 2025-12-13 07:14:54.330904336 +0000 UTC m=+0.018765100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:54 compute-0 podman[92716]: 2025-12-13 07:14:54.430306124 +0000 UTC m=+0.361673923 container remove 95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_elbakyan, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Dec 13 07:14:54 compute-0 systemd[1]: libpod-conmon-95bf82c5f8494264d3296b62a26d6610b28c4a0d4a6c8cbeff48728c64a278d5.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 sudo[92615]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:54 compute-0 sudo[92788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:54 compute-0 sudo[92788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:54 compute-0 sudo[92788]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:54 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 13 07:14:54 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 13 07:14:54 compute-0 sudo[92832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:14:54 compute-0 sudo[92832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:54 compute-0 podman[92867]: 2025-12-13 07:14:54.776802988 +0000 UTC m=+0.026516218 container create 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 07:14:54 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008101709' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 07:14:54 compute-0 systemd[1]: Started libpod-conmon-7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6.scope.
Dec 13 07:14:54 compute-0 vigilant_bhaskara[92776]: 
Dec 13 07:14:54 compute-0 vigilant_bhaskara[92776]: {"epoch":1,"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","modified":"2025-12-13T07:13:19.809500Z","created":"2025-12-13T07:13:19.809500Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 13 07:14:54 compute-0 vigilant_bhaskara[92776]: dumped monmap epoch 1
Dec 13 07:14:54 compute-0 podman[92760]: 2025-12-13 07:14:54.81183939 +0000 UTC m=+0.499700134 container died 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:14:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:54 compute-0 systemd[1]: libpod-0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 podman[92867]: 2025-12-13 07:14:54.820957227 +0000 UTC m=+0.070670477 container init 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 07:14:54 compute-0 podman[92867]: 2025-12-13 07:14:54.825946855 +0000 UTC m=+0.075660085 container start 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e7ce76781858ffbb41eae0b615e3d8a947c34030a3da43a017e46573808d84c-merged.mount: Deactivated successfully.
Dec 13 07:14:54 compute-0 compassionate_mestorf[92881]: 167 167
Dec 13 07:14:54 compute-0 systemd[1]: libpod-7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 podman[92867]: 2025-12-13 07:14:54.831914431 +0000 UTC m=+0.081627661 container attach 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:14:54 compute-0 podman[92867]: 2025-12-13 07:14:54.832234393 +0000 UTC m=+0.081947643 container died 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:54 compute-0 podman[92760]: 2025-12-13 07:14:54.837936329 +0000 UTC m=+0.525797073 container remove 0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6 (image=quay.io/ceph/ceph:v20, name=vigilant_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:54 compute-0 systemd[1]: libpod-conmon-0fc64db874ba73da58fd174898086b6538a68e42ede29f2c41d2048cff7ab4d6.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-62307fff621a39dbbc14a1bb8f2eed0a2bdf121fe0aef0058b081c924d87362f-merged.mount: Deactivated successfully.
Dec 13 07:14:54 compute-0 sudo[92757]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:54 compute-0 podman[92867]: 2025-12-13 07:14:54.857963371 +0000 UTC m=+0.107676601 container remove 7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:14:54 compute-0 podman[92867]: 2025-12-13 07:14:54.766658722 +0000 UTC m=+0.016371962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:54 compute-0 systemd[1]: libpod-conmon-7a7307296d333ff74b40340fe94bb16c8c0d4a2210fb67f981172ca565920ad6.scope: Deactivated successfully.
Dec 13 07:14:54 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 13 07:14:54 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 13 07:14:54 compute-0 podman[92913]: 2025-12-13 07:14:54.968912481 +0000 UTC m=+0.028129231 container create b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:54 compute-0 systemd[1]: Started libpod-conmon-b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83.scope.
Dec 13 07:14:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:55 compute-0 podman[92913]: 2025-12-13 07:14:55.008562364 +0000 UTC m=+0.067779133 container init b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:14:55 compute-0 podman[92913]: 2025-12-13 07:14:55.015064114 +0000 UTC m=+0.074280863 container start b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:55 compute-0 podman[92913]: 2025-12-13 07:14:55.016053653 +0000 UTC m=+0.075270403 container attach b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:14:55 compute-0 podman[92913]: 2025-12-13 07:14:54.958250001 +0000 UTC m=+0.017466770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:55 compute-0 sudo[92954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrjmvxnumglvhnzfldwljqtffkzpmvja ; /usr/bin/python3'
Dec 13 07:14:55 compute-0 sudo[92954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:55 compute-0 ceph-mon[74928]: 7.1e scrub starts
Dec 13 07:14:55 compute-0 ceph-mon[74928]: 7.1e scrub ok
Dec 13 07:14:55 compute-0 ceph-mon[74928]: pgmap v63: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:55 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1008101709' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 07:14:55 compute-0 python3[92956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:55 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 13 07:14:55 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 13 07:14:55 compute-0 podman[92978]: 2025-12-13 07:14:55.315383788 +0000 UTC m=+0.042534154 container create 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:55 compute-0 systemd[1]: Started libpod-conmon-4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a.scope.
Dec 13 07:14:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cf03bf2dadab4b291d72c95ff85c29ad2806389524588287c2d5609524aa2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cf03bf2dadab4b291d72c95ff85c29ad2806389524588287c2d5609524aa2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:55 compute-0 podman[92978]: 2025-12-13 07:14:55.30057246 +0000 UTC m=+0.027722826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:55 compute-0 podman[92978]: 2025-12-13 07:14:55.399940474 +0000 UTC m=+0.127090860 container init 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:55 compute-0 podman[92978]: 2025-12-13 07:14:55.404203065 +0000 UTC m=+0.131353431 container start 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 07:14:55 compute-0 podman[92978]: 2025-12-13 07:14:55.405581536 +0000 UTC m=+0.132731962 container attach 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 07:14:55 compute-0 lvm[93064]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:14:55 compute-0 lvm[93065]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:14:55 compute-0 lvm[93064]: VG ceph_vg0 finished
Dec 13 07:14:55 compute-0 lvm[93065]: VG ceph_vg1 finished
Dec 13 07:14:55 compute-0 lvm[93068]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:14:55 compute-0 lvm[93068]: VG ceph_vg2 finished
Dec 13 07:14:55 compute-0 boring_albattani[92926]: {}
Dec 13 07:14:55 compute-0 systemd[1]: libpod-b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83.scope: Deactivated successfully.
Dec 13 07:14:55 compute-0 conmon[92926]: conmon b8bcb33142e5a6c09a80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83.scope/container/memory.events
Dec 13 07:14:55 compute-0 podman[93071]: 2025-12-13 07:14:55.62880108 +0000 UTC m=+0.017750854 container died b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-506ba23af6da1ee6784a1ebc4adea43abc07a2e956a59fe8837ec08957128472-merged.mount: Deactivated successfully.
Dec 13 07:14:55 compute-0 podman[93071]: 2025-12-13 07:14:55.649815026 +0000 UTC m=+0.038764791 container remove b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:55 compute-0 systemd[1]: libpod-conmon-b8bcb33142e5a6c09a80b7189d4963db35608773ef024ddd231c43c4d3658c83.scope: Deactivated successfully.
Dec 13 07:14:55 compute-0 sudo[92832]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:55 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 2fdfac95-ecea-4b6d-9d0b-497dbccd0217 (Updating rgw.rgw deployment (+1 -> 1))
Dec 13 07:14:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 13 07:14:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 13 07:14:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 13 07:14:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 13 07:14:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:55 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:55 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.kikquh on compute-0
Dec 13 07:14:55 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.kikquh on compute-0
Dec 13 07:14:55 compute-0 sudo[93082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:55 compute-0 sudo[93082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:55 compute-0 sudo[93082]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:55 compute-0 sudo[93107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:14:55 compute-0 sudo[93107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 13 07:14:55 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3324588932' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 13 07:14:55 compute-0 vigilant_davinci[93025]: [client.openstack]
Dec 13 07:14:55 compute-0 vigilant_davinci[93025]:         key = AQDvET1pAAAAABAAXRTVwZkpmvDiKzdXsEX84w==
Dec 13 07:14:55 compute-0 vigilant_davinci[93025]:         caps mgr = "allow *"
Dec 13 07:14:55 compute-0 vigilant_davinci[93025]:         caps mon = "profile rbd"
Dec 13 07:14:55 compute-0 vigilant_davinci[93025]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 13 07:14:55 compute-0 systemd[1]: libpod-4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a.scope: Deactivated successfully.
Dec 13 07:14:55 compute-0 podman[92978]: 2025-12-13 07:14:55.825738306 +0000 UTC m=+0.552888672 container died 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 07:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00cf03bf2dadab4b291d72c95ff85c29ad2806389524588287c2d5609524aa2-merged.mount: Deactivated successfully.
Dec 13 07:14:55 compute-0 podman[92978]: 2025-12-13 07:14:55.848211615 +0000 UTC m=+0.575361981 container remove 4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a (image=quay.io/ceph/ceph:v20, name=vigilant_davinci, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:55 compute-0 systemd[1]: libpod-conmon-4b8ac5311c98732cb6584cf13a8175006eda42dbd9556540f3752cfaa968231a.scope: Deactivated successfully.
Dec 13 07:14:55 compute-0 sudo[92954]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:56 compute-0 podman[93177]: 2025-12-13 07:14:56.086709315 +0000 UTC m=+0.025708250 container create f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:14:56 compute-0 systemd[1]: Started libpod-conmon-f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d.scope.
Dec 13 07:14:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:56 compute-0 podman[93177]: 2025-12-13 07:14:56.132166342 +0000 UTC m=+0.071165287 container init f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:14:56 compute-0 podman[93177]: 2025-12-13 07:14:56.136615083 +0000 UTC m=+0.075614019 container start f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 07:14:56 compute-0 nice_kalam[93190]: 167 167
Dec 13 07:14:56 compute-0 systemd[1]: libpod-f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d.scope: Deactivated successfully.
Dec 13 07:14:56 compute-0 podman[93177]: 2025-12-13 07:14:56.140835396 +0000 UTC m=+0.079834351 container attach f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:14:56 compute-0 podman[93177]: 2025-12-13 07:14:56.141043277 +0000 UTC m=+0.080042212 container died f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 07:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2f7eab366051ed7ed9dae7d44c9bebec0e2e9bb2bd4f8865464fb78a5d3e2c4-merged.mount: Deactivated successfully.
Dec 13 07:14:56 compute-0 podman[93177]: 2025-12-13 07:14:56.158341869 +0000 UTC m=+0.097340804 container remove f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kalam, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 07:14:56 compute-0 podman[93177]: 2025-12-13 07:14:56.076223547 +0000 UTC m=+0.015222502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:56 compute-0 systemd[1]: libpod-conmon-f10896be36c6b99e3bbe0052972ed2a3f15dc192d4c6f0621d0d3155a6f3a40d.scope: Deactivated successfully.
Dec 13 07:14:56 compute-0 systemd[1]: Reloading.
Dec 13 07:14:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v64: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:56 compute-0 ceph-mon[74928]: 5.1c scrub starts
Dec 13 07:14:56 compute-0 ceph-mon[74928]: 5.1c scrub ok
Dec 13 07:14:56 compute-0 ceph-mon[74928]: 3.1a scrub starts
Dec 13 07:14:56 compute-0 ceph-mon[74928]: 3.1a scrub ok
Dec 13 07:14:56 compute-0 ceph-mon[74928]: 2.1a scrub starts
Dec 13 07:14:56 compute-0 ceph-mon[74928]: 2.1a scrub ok
Dec 13 07:14:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 13 07:14:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kikquh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 13 07:14:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:56 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3324588932' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 13 07:14:56 compute-0 systemd-sysv-generator[93233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:56 compute-0 systemd-rc-local-generator[93224]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:56 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 13 07:14:56 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 13 07:14:56 compute-0 systemd[1]: Reloading.
Dec 13 07:14:56 compute-0 systemd-sysv-generator[93269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:56 compute-0 systemd-rc-local-generator[93266]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:56 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.kikquh for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:14:56 compute-0 podman[93422]: 2025-12-13 07:14:56.810903878 +0000 UTC m=+0.027521908 container create 69ac193e949f4ec1c85ec7f6eeac603563ef58b890ae60370ce1d5f216a4080c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-rgw-rgw-compute-0-kikquh, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 07:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e333fa43a805bfdea08e5fddc8346c8cf7d0194486896d26b03368e3d204987f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e333fa43a805bfdea08e5fddc8346c8cf7d0194486896d26b03368e3d204987f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e333fa43a805bfdea08e5fddc8346c8cf7d0194486896d26b03368e3d204987f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e333fa43a805bfdea08e5fddc8346c8cf7d0194486896d26b03368e3d204987f/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.kikquh supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:56 compute-0 podman[93422]: 2025-12-13 07:14:56.849930158 +0000 UTC m=+0.066548198 container init 69ac193e949f4ec1c85ec7f6eeac603563ef58b890ae60370ce1d5f216a4080c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-rgw-rgw-compute-0-kikquh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:56 compute-0 podman[93422]: 2025-12-13 07:14:56.854302286 +0000 UTC m=+0.070920315 container start 69ac193e949f4ec1c85ec7f6eeac603563ef58b890ae60370ce1d5f216a4080c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-rgw-rgw-compute-0-kikquh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 07:14:56 compute-0 bash[93422]: 69ac193e949f4ec1c85ec7f6eeac603563ef58b890ae60370ce1d5f216a4080c
Dec 13 07:14:56 compute-0 podman[93422]: 2025-12-13 07:14:56.799794798 +0000 UTC m=+0.016412848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:56 compute-0 sudo[93486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmbhypdsdfqbpamlxqbiogiioqmdexkh ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610096.6256976-37138-243185074326914/async_wrapper.py j932427807995 30 /home/zuul/.ansible/tmp/ansible-tmp-1765610096.6256976-37138-243185074326914/AnsiballZ_command.py _'
Dec 13 07:14:56 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.kikquh for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:14:56 compute-0 sudo[93486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:56 compute-0 radosgw[93487]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:14:56 compute-0 radosgw[93487]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Dec 13 07:14:56 compute-0 radosgw[93487]: framework: beast
Dec 13 07:14:56 compute-0 radosgw[93487]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 13 07:14:56 compute-0 radosgw[93487]: init_numa not setting numa affinity
Dec 13 07:14:56 compute-0 sudo[93107]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 07:14:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:56 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 2fdfac95-ecea-4b6d-9d0b-497dbccd0217 (Updating rgw.rgw deployment (+1 -> 1))
Dec 13 07:14:56 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 2fdfac95-ecea-4b6d-9d0b-497dbccd0217 (Updating rgw.rgw deployment (+1 -> 1)) in 1 seconds
Dec 13 07:14:56 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec 13 07:14:56 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 13 07:14:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 07:14:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 07:14:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:56 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 3f96108e-8b09-4845-98c0-5ac9311cc03e (Updating mds.cephfs deployment (+1 -> 1))
Dec 13 07:14:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 13 07:14:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 13 07:14:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 13 07:14:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:56 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:56 compute-0 ceph-mgr[75200]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.zwnyoz on compute-0
Dec 13 07:14:56 compute-0 ceph-mgr[75200]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.zwnyoz on compute-0
Dec 13 07:14:56 compute-0 sudo[93518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:56 compute-0 sudo[93518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:56 compute-0 sudo[93518]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:56 compute-0 ansible-async_wrapper.py[93489]: Invoked with j932427807995 30 /home/zuul/.ansible/tmp/ansible-tmp-1765610096.6256976-37138-243185074326914/AnsiballZ_command.py _
Dec 13 07:14:56 compute-0 ansible-async_wrapper.py[93568]: Starting module and watcher
Dec 13 07:14:56 compute-0 ansible-async_wrapper.py[93568]: Start watching 93569 (30)
Dec 13 07:14:57 compute-0 ansible-async_wrapper.py[93569]: Start module (93569)
Dec 13 07:14:57 compute-0 ansible-async_wrapper.py[93489]: Return async_wrapper task started.
Dec 13 07:14:57 compute-0 sudo[93543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
Dec 13 07:14:57 compute-0 sudo[93543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:57 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 13 07:14:57 compute-0 sudo[93486]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:57 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 13 07:14:57 compute-0 python3[93572]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:57 compute-0 podman[93573]: 2025-12-13 07:14:57.167176357 +0000 UTC m=+0.028804460 container create 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:57 compute-0 systemd[1]: Started libpod-conmon-8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7.scope.
Dec 13 07:14:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ddf86cd2100e3e80d23f6321295c0cd3a261e381695184d4d4290bf5c7893fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ddf86cd2100e3e80d23f6321295c0cd3a261e381695184d4d4290bf5c7893fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:57 compute-0 podman[93573]: 2025-12-13 07:14:57.220478088 +0000 UTC m=+0.082106202 container init 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:57 compute-0 ceph-mon[74928]: Deploying daemon rgw.rgw.compute-0.kikquh on compute-0
Dec 13 07:14:57 compute-0 ceph-mon[74928]: pgmap v64: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:57 compute-0 ceph-mon[74928]: 6.1a scrub starts
Dec 13 07:14:57 compute-0 ceph-mon[74928]: 6.1a scrub ok
Dec 13 07:14:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 13 07:14:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zwnyoz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 13 07:14:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:57 compute-0 podman[93573]: 2025-12-13 07:14:57.227395188 +0000 UTC m=+0.089023291 container start 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:57 compute-0 podman[93573]: 2025-12-13 07:14:57.233719696 +0000 UTC m=+0.095347809 container attach 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 07:14:57 compute-0 podman[93573]: 2025-12-13 07:14:57.156222158 +0000 UTC m=+0.017850271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:57 compute-0 podman[93625]: 2025-12-13 07:14:57.336155364 +0000 UTC m=+0.035409704 container create 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:57 compute-0 systemd[1]: Started libpod-conmon-099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f.scope.
Dec 13 07:14:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:57 compute-0 podman[93625]: 2025-12-13 07:14:57.388724709 +0000 UTC m=+0.087979069 container init 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:57 compute-0 podman[93625]: 2025-12-13 07:14:57.393366424 +0000 UTC m=+0.092620764 container start 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:57 compute-0 podman[93625]: 2025-12-13 07:14:57.395673469 +0000 UTC m=+0.094927830 container attach 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:57 compute-0 festive_cannon[93657]: 167 167
Dec 13 07:14:57 compute-0 systemd[1]: libpod-099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f.scope: Deactivated successfully.
Dec 13 07:14:57 compute-0 podman[93625]: 2025-12-13 07:14:57.397397891 +0000 UTC m=+0.096652241 container died 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4febb36d8f08c0f19621df737b92154125202a610a9adf0bf160b70a89699d4-merged.mount: Deactivated successfully.
Dec 13 07:14:57 compute-0 podman[93625]: 2025-12-13 07:14:57.318623052 +0000 UTC m=+0.017877402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:57 compute-0 podman[93625]: 2025-12-13 07:14:57.416992229 +0000 UTC m=+0.116246569 container remove 099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 07:14:57 compute-0 systemd[1]: libpod-conmon-099fea401222931ce446c8e41dcf83c5c318918889bec2bedc3606f760a6084f.scope: Deactivated successfully.
Dec 13 07:14:57 compute-0 systemd[1]: Reloading.
Dec 13 07:14:57 compute-0 systemd-rc-local-generator[93693]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:57 compute-0 systemd-sysv-generator[93699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:57 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:14:57 compute-0 naughty_raman[93593]: 
Dec 13 07:14:57 compute-0 naughty_raman[93593]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 07:14:57 compute-0 podman[93573]: 2025-12-13 07:14:57.583802472 +0000 UTC m=+0.445430585 container died 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:14:57 compute-0 systemd[1]: libpod-8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7.scope: Deactivated successfully.
Dec 13 07:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ddf86cd2100e3e80d23f6321295c0cd3a261e381695184d4d4290bf5c7893fa-merged.mount: Deactivated successfully.
Dec 13 07:14:57 compute-0 podman[93573]: 2025-12-13 07:14:57.673519484 +0000 UTC m=+0.535147588 container remove 8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7 (image=quay.io/ceph/ceph:v20, name=naughty_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:57 compute-0 systemd[1]: libpod-conmon-8532d33052f7cc55b77631d2784e20c17fcd52bd9677fa2c17af423fb360c8d7.scope: Deactivated successfully.
Dec 13 07:14:57 compute-0 ansible-async_wrapper.py[93569]: Module complete (93569)
Dec 13 07:14:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:14:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 13 07:14:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 13 07:14:57 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 13 07:14:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 13 07:14:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1471949362' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 13 07:14:57 compute-0 systemd[1]: Reloading.
Dec 13 07:14:57 compute-0 systemd-rc-local-generator[93748]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:14:57 compute-0 systemd-sysv-generator[93751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:14:57 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.zwnyoz for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de...
Dec 13 07:14:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 35 pg[8.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:58 compute-0 podman[93821]: 2025-12-13 07:14:58.10377739 +0000 UTC m=+0.031613199 container create c65ab07d188ff0da2857402181db229477778de43e052dbfea38f8c212c47f50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 07:14:58 compute-0 sudo[93855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asyuacfhtvrhhkpfhsrkcwbzrfizdufs ; /usr/bin/python3'
Dec 13 07:14:58 compute-0 sudo[93855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02c05f4f9aa98d6346eecda97a1eaa8bcc35c64dca7a62c6f94dc4c5161700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02c05f4f9aa98d6346eecda97a1eaa8bcc35c64dca7a62c6f94dc4c5161700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02c05f4f9aa98d6346eecda97a1eaa8bcc35c64dca7a62c6f94dc4c5161700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02c05f4f9aa98d6346eecda97a1eaa8bcc35c64dca7a62c6f94dc4c5161700/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.zwnyoz supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:58 compute-0 podman[93821]: 2025-12-13 07:14:58.154740967 +0000 UTC m=+0.082576805 container init c65ab07d188ff0da2857402181db229477778de43e052dbfea38f8c212c47f50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:58 compute-0 podman[93821]: 2025-12-13 07:14:58.158996696 +0000 UTC m=+0.086832505 container start c65ab07d188ff0da2857402181db229477778de43e052dbfea38f8c212c47f50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 13 07:14:58 compute-0 bash[93821]: c65ab07d188ff0da2857402181db229477778de43e052dbfea38f8c212c47f50
Dec 13 07:14:58 compute-0 podman[93821]: 2025-12-13 07:14:58.089733445 +0000 UTC m=+0.017569273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:58 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.zwnyoz for 00fdae1b-7fad-5f1b-8734-ba4d9298a6de.
Dec 13 07:14:58 compute-0 ceph-mds[93864]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:14:58 compute-0 ceph-mds[93864]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec 13 07:14:58 compute-0 ceph-mds[93864]: main not setting numa affinity
Dec 13 07:14:58 compute-0 ceph-mds[93864]: pidfile_write: ignore empty --pid-file
Dec 13 07:14:58 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz[93860]: starting mds.cephfs.compute-0.zwnyoz at 
Dec 13 07:14:58 compute-0 sudo[93543]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v66: 194 pgs: 1 unknown, 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:58 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Updating MDS map to version 2 from mon.0
Dec 13 07:14:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 07:14:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 3f96108e-8b09-4845-98c0-5ac9311cc03e (Updating mds.cephfs deployment (+1 -> 1))
Dec 13 07:14:58 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 3f96108e-8b09-4845-98c0-5ac9311cc03e (Updating mds.cephfs deployment (+1 -> 1)) in 1 seconds
Dec 13 07:14:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 13 07:14:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 07:14:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mon[74928]: Saving service rgw.rgw spec with placement compute-0
Dec 13 07:14:58 compute-0 ceph-mon[74928]: Deploying daemon mds.cephfs.compute-0.zwnyoz on compute-0
Dec 13 07:14:58 compute-0 ceph-mon[74928]: 7.1d scrub starts
Dec 13 07:14:58 compute-0 ceph-mon[74928]: 7.1d scrub ok
Dec 13 07:14:58 compute-0 ceph-mon[74928]: from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:14:58 compute-0 ceph-mon[74928]: osdmap e35: 3 total, 3 up, 3 in
Dec 13 07:14:58 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1471949362' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 13 07:14:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:58 compute-0 python3[93859]: ansible-ansible.legacy.async_status Invoked with jid=j932427807995.93489 mode=status _async_dir=/root/.ansible_async
Dec 13 07:14:58 compute-0 sudo[93883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:14:58 compute-0 sudo[93883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:58 compute-0 sudo[93855]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:58 compute-0 sudo[93883]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:58 compute-0 sudo[93908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:58 compute-0 sudo[93908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:58 compute-0 sudo[93908]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:58 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 13 07:14:58 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 13 07:14:58 compute-0 sudo[93956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:14:58 compute-0 sudo[93956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:58 compute-0 sudo[94002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rguyadylodmilmxubrclhxkkcjfrfoyu ; /usr/bin/python3'
Dec 13 07:14:58 compute-0 sudo[94002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:58 compute-0 python3[94006]: ansible-ansible.legacy.async_status Invoked with jid=j932427807995.93489 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 07:14:58 compute-0 sudo[94002]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:58 compute-0 podman[94041]: 2025-12-13 07:14:58.680523703 +0000 UTC m=+0.040004370 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:14:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 13 07:14:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1471949362' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 13 07:14:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 13 07:14:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 36 pg[8.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:14:58 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 13 07:14:58 compute-0 podman[94041]: 2025-12-13 07:14:58.769748392 +0000 UTC m=+0.129229059 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:58 compute-0 sudo[94662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmcixjdlbbehcenvmsbykoeiicdkxofg ; /usr/bin/python3'
Dec 13 07:14:58 compute-0 sudo[94662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:14:58 compute-0 python3[94666]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:14:58 compute-0 podman[94705]: 2025-12-13 07:14:58.999612615 +0000 UTC m=+0.030389990 container create 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:59 compute-0 systemd[1]: Started libpod-conmon-631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62.scope.
Dec 13 07:14:59 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 13 07:14:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07504719ca99500a076fde8787a4c051858b21071ad1341a4e32a4681ab3e2ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07504719ca99500a076fde8787a4c051858b21071ad1341a4e32a4681ab3e2ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:59 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 13 07:14:59 compute-0 podman[94705]: 2025-12-13 07:14:59.054785483 +0000 UTC m=+0.085562878 container init 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:59 compute-0 podman[94705]: 2025-12-13 07:14:59.060552912 +0000 UTC m=+0.091330288 container start 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:59 compute-0 podman[94705]: 2025-12-13 07:14:59.062304094 +0000 UTC m=+0.093081469 container attach 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:59 compute-0 podman[94705]: 2025-12-13 07:14:58.986569951 +0000 UTC m=+0.017347336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:14:59 compute-0 ceph-mgr[75200]: [progress INFO root] Writing back 12 completed events
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 new map
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-13T07:14:59:201141+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-13T07:14:50.409831+0000
                                           modified        2025-12-13T07:14:50.409831+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zwnyoz{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] compat {c=[1],r=[1],i=[1fff]}]
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Updating MDS map to version 3 from mon.0
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Monitors have assigned me to become a standby
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] up:boot
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] as mds.0
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.zwnyoz assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.zwnyoz"} v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.zwnyoz"} : dispatch
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e3 all = 0
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e4 new map
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-13T07:14:59:205999+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-13T07:14:50.409831+0000
                                           modified        2025-12-13T07:14:59.205991+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14253}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.zwnyoz{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.zwnyoz=up:creating}
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Updating MDS map to version 4 from mon.0
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x1
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x100
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x600
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x601
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x602
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x603
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x604
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x605
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x606
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x607
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x608
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.cache creating system inode with ino:0x609
Dec 13 07:14:59 compute-0 ceph-mon[74928]: pgmap v66: 194 pgs: 1 unknown, 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:14:59 compute-0 ceph-mon[74928]: 5.1f scrub starts
Dec 13 07:14:59 compute-0 ceph-mon[74928]: 5.1f scrub ok
Dec 13 07:14:59 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1471949362' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 13 07:14:59 compute-0 ceph-mon[74928]: osdmap e36: 3 total, 3 up, 3 in
Dec 13 07:14:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mds.? [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] up:boot
Dec 13 07:14:59 compute-0 ceph-mon[74928]: daemon mds.cephfs.compute-0.zwnyoz assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: Cluster is now healthy
Dec 13 07:14:59 compute-0 ceph-mon[74928]: fsmap cephfs:0 1 up:standby
Dec 13 07:14:59 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.zwnyoz"} : dispatch
Dec 13 07:14:59 compute-0 ceph-mon[74928]: fsmap cephfs:1 {0=cephfs.compute-0.zwnyoz=up:creating}
Dec 13 07:14:59 compute-0 ceph-mds[93864]: mds.0.4 creating_done
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.zwnyoz is now active in filesystem cephfs as rank 0
Dec 13 07:14:59 compute-0 sudo[93956]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:14:59 compute-0 sudo[94825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:14:59 compute-0 sudo[94825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:59 compute-0 sudo[94825]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:59 compute-0 sudo[94850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:14:59 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:14:59 compute-0 compassionate_shamir[94731]: 
Dec 13 07:14:59 compute-0 sudo[94850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:14:59 compute-0 compassionate_shamir[94731]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 07:14:59 compute-0 systemd[1]: libpod-631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62.scope: Deactivated successfully.
Dec 13 07:14:59 compute-0 conmon[94731]: conmon 631e5fc48d87819a76f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62.scope/container/memory.events
Dec 13 07:14:59 compute-0 podman[94705]: 2025-12-13 07:14:59.433553983 +0000 UTC m=+0.464331358 container died 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-07504719ca99500a076fde8787a4c051858b21071ad1341a4e32a4681ab3e2ae-merged.mount: Deactivated successfully.
Dec 13 07:14:59 compute-0 podman[94705]: 2025-12-13 07:14:59.453744291 +0000 UTC m=+0.484521666 container remove 631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62 (image=quay.io/ceph/ceph:v20, name=compassionate_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:14:59 compute-0 sudo[94662]: pam_unix(sudo:session): session closed for user root
Dec 13 07:14:59 compute-0 systemd[1]: libpod-conmon-631e5fc48d87819a76f4b49b83ecc7bcd38daf7509c9e36b15f526b8b4fb7a62.scope: Deactivated successfully.
Dec 13 07:14:59 compute-0 podman[94897]: 2025-12-13 07:14:59.644322295 +0000 UTC m=+0.027956666 container create 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:59 compute-0 systemd[1]: Started libpod-conmon-51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7.scope.
Dec 13 07:14:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:59 compute-0 podman[94897]: 2025-12-13 07:14:59.689390241 +0000 UTC m=+0.073024622 container init 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:59 compute-0 podman[94897]: 2025-12-13 07:14:59.693743353 +0000 UTC m=+0.077377714 container start 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:59 compute-0 podman[94897]: 2025-12-13 07:14:59.694925744 +0000 UTC m=+0.078560126 container attach 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:14:59 compute-0 stoic_haibt[94911]: 167 167
Dec 13 07:14:59 compute-0 systemd[1]: libpod-51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7.scope: Deactivated successfully.
Dec 13 07:14:59 compute-0 podman[94897]: 2025-12-13 07:14:59.697726479 +0000 UTC m=+0.081360840 container died 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 13 07:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-597ad554ce66a676e2f9788cd25e052e40c0d572de37155a73580f6819a25f8d-merged.mount: Deactivated successfully.
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 13 07:14:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 37 pg[9.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:14:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 13 07:14:59 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 13 07:14:59 compute-0 podman[94897]: 2025-12-13 07:14:59.719619086 +0000 UTC m=+0.103253447 container remove 51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:14:59 compute-0 podman[94897]: 2025-12-13 07:14:59.633165816 +0000 UTC m=+0.016800197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:14:59 compute-0 systemd[1]: libpod-conmon-51aada05e5a35dcf6f04b6108910c9d691d12ffabafeb76404cb582710a4c6a7.scope: Deactivated successfully.
Dec 13 07:14:59 compute-0 podman[94932]: 2025-12-13 07:14:59.835403935 +0000 UTC m=+0.028737384 container create b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:14:59 compute-0 systemd[1]: Started libpod-conmon-b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75.scope.
Dec 13 07:14:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:14:59 compute-0 podman[94932]: 2025-12-13 07:14:59.900070367 +0000 UTC m=+0.093403835 container init b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:14:59 compute-0 podman[94932]: 2025-12-13 07:14:59.905283295 +0000 UTC m=+0.098616743 container start b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:14:59 compute-0 podman[94932]: 2025-12-13 07:14:59.906476547 +0000 UTC m=+0.099809996 container attach b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:14:59 compute-0 podman[94932]: 2025-12-13 07:14:59.824128573 +0000 UTC m=+0.017462031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:00 compute-0 sudo[94973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgxrnxqntdijqairfwxrzvievrbuenru ; /usr/bin/python3'
Dec 13 07:15:00 compute-0 sudo[94973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:00 compute-0 python3[94977]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v69: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:00 compute-0 podman[94985]: 2025-12-13 07:15:00.22198548 +0000 UTC m=+0.030024913 container create cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:15:00 compute-0 ceph-mon[74928]: 3.19 scrub starts
Dec 13 07:15:00 compute-0 ceph-mon[74928]: 3.19 scrub ok
Dec 13 07:15:00 compute-0 ceph-mon[74928]: daemon mds.cephfs.compute-0.zwnyoz is now active in filesystem cephfs as rank 0
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:15:00 compute-0 ceph-mon[74928]: osdmap e37: 3 total, 3 up, 3 in
Dec 13 07:15:00 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 13 07:15:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e5 new map
Dec 13 07:15:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-13T07:15:00:227157+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-13T07:14:50.409831+0000
                                           modified        2025-12-13T07:15:00.227156+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14253}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14253 members: 14253
                                           [mds.cephfs.compute-0.zwnyoz{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 13 07:15:00 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz Updating MDS map to version 5 from mon.0
Dec 13 07:15:00 compute-0 ceph-mds[93864]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 13 07:15:00 compute-0 ceph-mds[93864]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec 13 07:15:00 compute-0 ceph-mds[93864]: mds.0.4 recovery_done -- successful recovery!
Dec 13 07:15:00 compute-0 ceph-mds[93864]: mds.0.4 active_start
Dec 13 07:15:00 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] up:active
Dec 13 07:15:00 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.zwnyoz=up:active}
Dec 13 07:15:00 compute-0 systemd[1]: Started libpod-conmon-cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19.scope.
Dec 13 07:15:00 compute-0 objective_panini[94945]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:15:00 compute-0 objective_panini[94945]: --> All data devices are unavailable
Dec 13 07:15:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44f586ba1bc7c23302f55082e53b8073af43974baea8b816d7878fbb7d98c94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44f586ba1bc7c23302f55082e53b8073af43974baea8b816d7878fbb7d98c94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:00 compute-0 podman[94985]: 2025-12-13 07:15:00.280554271 +0000 UTC m=+0.088593694 container init cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:15:00 compute-0 podman[94985]: 2025-12-13 07:15:00.286499556 +0000 UTC m=+0.094538979 container start cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:15:00 compute-0 podman[94985]: 2025-12-13 07:15:00.287932588 +0000 UTC m=+0.095972011 container attach cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:15:00 compute-0 systemd[1]: libpod-b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75.scope: Deactivated successfully.
Dec 13 07:15:00 compute-0 podman[94932]: 2025-12-13 07:15:00.296599889 +0000 UTC m=+0.489933357 container died b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 07:15:00 compute-0 podman[94985]: 2025-12-13 07:15:00.209479686 +0000 UTC m=+0.017519129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:15:00 compute-0 podman[94932]: 2025-12-13 07:15:00.31836662 +0000 UTC m=+0.511700069 container remove b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:15:00 compute-0 systemd[1]: libpod-conmon-b8a87ebdcdd3f8c4f84e2efdd38c8156ed9e2f55a43ce5402daf7218c38f2b75.scope: Deactivated successfully.
Dec 13 07:15:00 compute-0 sudo[94850]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:00 compute-0 sudo[95020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:15:00 compute-0 sudo[95020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:00 compute-0 sudo[95020]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:00 compute-0 sudo[95064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:15:00 compute-0 sudo[95064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a6eb9b1949a75ab7a724d9d6e0f80792d31e8398ea9f752a19ea24ca7c24a18-merged.mount: Deactivated successfully.
Dec 13 07:15:00 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:15:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} v 0)
Dec 13 07:15:00 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:15:00 compute-0 elastic_lederberg[95006]: 
Dec 13 07:15:00 compute-0 elastic_lederberg[95006]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Dec 13 07:15:00 compute-0 systemd[1]: libpod-cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19.scope: Deactivated successfully.
Dec 13 07:15:00 compute-0 podman[94985]: 2025-12-13 07:15:00.622504231 +0000 UTC m=+0.430543653 container died cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 07:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c44f586ba1bc7c23302f55082e53b8073af43974baea8b816d7878fbb7d98c94-merged.mount: Deactivated successfully.
Dec 13 07:15:00 compute-0 podman[94985]: 2025-12-13 07:15:00.647245643 +0000 UTC m=+0.455285066 container remove cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19 (image=quay.io/ceph/ceph:v20, name=elastic_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 07:15:00 compute-0 systemd[1]: libpod-conmon-cc29cf4abdfb216040c9c11e5f2feb9647618a244db1e2290f76b6d1b43b5e19.scope: Deactivated successfully.
Dec 13 07:15:00 compute-0 sudo[94973]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:00 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:15:00 compute-0 podman[95107]: 2025-12-13 07:15:00.679372846 +0000 UTC m=+0.031627185 container create 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 07:15:00 compute-0 systemd[1]: Started libpod-conmon-2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5.scope.
Dec 13 07:15:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 13 07:15:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 13 07:15:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 13 07:15:00 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 13 07:15:00 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 38 pg[9.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:00 compute-0 podman[95107]: 2025-12-13 07:15:00.737232493 +0000 UTC m=+0.089486843 container init 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:15:00 compute-0 podman[95107]: 2025-12-13 07:15:00.742535812 +0000 UTC m=+0.094790141 container start 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:15:00 compute-0 podman[95107]: 2025-12-13 07:15:00.743925704 +0000 UTC m=+0.096180033 container attach 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:00 compute-0 determined_sanderson[95126]: 167 167
Dec 13 07:15:00 compute-0 systemd[1]: libpod-2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5.scope: Deactivated successfully.
Dec 13 07:15:00 compute-0 podman[95107]: 2025-12-13 07:15:00.74615859 +0000 UTC m=+0.098412919 container died 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 07:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3a2571871d8bf96b6fba481ef06417170af90b23fd81f1d5a1a70abf5772651-merged.mount: Deactivated successfully.
Dec 13 07:15:00 compute-0 podman[95107]: 2025-12-13 07:15:00.666739612 +0000 UTC m=+0.018993961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:00 compute-0 podman[95107]: 2025-12-13 07:15:00.767318992 +0000 UTC m=+0.119573321 container remove 2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:15:00 compute-0 systemd[1]: libpod-conmon-2628a455774141af8be81e344af0d9e7e19e7e9df2183edf11c9f48d5a6fa5b5.scope: Deactivated successfully.
Dec 13 07:15:00 compute-0 podman[95149]: 2025-12-13 07:15:00.882394909 +0000 UTC m=+0.029400900 container create 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 07:15:00 compute-0 systemd[1]: Started libpod-conmon-968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3.scope.
Dec 13 07:15:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:00 compute-0 podman[95149]: 2025-12-13 07:15:00.950815256 +0000 UTC m=+0.097821247 container init 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 07:15:00 compute-0 podman[95149]: 2025-12-13 07:15:00.955472639 +0000 UTC m=+0.102478620 container start 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:15:00 compute-0 podman[95149]: 2025-12-13 07:15:00.956692192 +0000 UTC m=+0.103698183 container attach 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 07:15:00 compute-0 podman[95149]: 2025-12-13 07:15:00.869581977 +0000 UTC m=+0.016587978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:01 compute-0 epic_jemison[95162]: {
Dec 13 07:15:01 compute-0 epic_jemison[95162]:     "0": [
Dec 13 07:15:01 compute-0 epic_jemison[95162]:         {
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "devices": [
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "/dev/loop3"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             ],
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_name": "ceph_lv0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_size": "21470642176",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "name": "ceph_lv0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "tags": {
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cluster_name": "ceph",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.crush_device_class": "",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.encrypted": "0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.objectstore": "bluestore",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osd_id": "0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.type": "block",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.vdo": "0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.with_tpm": "0"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             },
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "type": "block",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "vg_name": "ceph_vg0"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:         }
Dec 13 07:15:01 compute-0 epic_jemison[95162]:     ],
Dec 13 07:15:01 compute-0 epic_jemison[95162]:     "1": [
Dec 13 07:15:01 compute-0 epic_jemison[95162]:         {
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "devices": [
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "/dev/loop4"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             ],
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_name": "ceph_lv1",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_size": "21470642176",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "name": "ceph_lv1",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "tags": {
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cluster_name": "ceph",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.crush_device_class": "",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.encrypted": "0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.objectstore": "bluestore",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osd_id": "1",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.type": "block",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.vdo": "0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.with_tpm": "0"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             },
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "type": "block",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "vg_name": "ceph_vg1"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:         }
Dec 13 07:15:01 compute-0 epic_jemison[95162]:     ],
Dec 13 07:15:01 compute-0 epic_jemison[95162]:     "2": [
Dec 13 07:15:01 compute-0 epic_jemison[95162]:         {
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "devices": [
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "/dev/loop5"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             ],
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_name": "ceph_lv2",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_size": "21470642176",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "name": "ceph_lv2",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "tags": {
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.cluster_name": "ceph",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.crush_device_class": "",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.encrypted": "0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.objectstore": "bluestore",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osd_id": "2",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.type": "block",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.vdo": "0",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:                 "ceph.with_tpm": "0"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             },
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "type": "block",
Dec 13 07:15:01 compute-0 epic_jemison[95162]:             "vg_name": "ceph_vg2"
Dec 13 07:15:01 compute-0 epic_jemison[95162]:         }
Dec 13 07:15:01 compute-0 epic_jemison[95162]:     ]
Dec 13 07:15:01 compute-0 epic_jemison[95162]: }
Dec 13 07:15:01 compute-0 systemd[1]: libpod-968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3.scope: Deactivated successfully.
Dec 13 07:15:01 compute-0 conmon[95162]: conmon 968d2d5ad450dd8b3da2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3.scope/container/memory.events
Dec 13 07:15:01 compute-0 podman[95149]: 2025-12-13 07:15:01.214712734 +0000 UTC m=+0.361718715 container died 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 07:15:01 compute-0 ceph-mon[74928]: pgmap v69: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:01 compute-0 ceph-mon[74928]: mds.? [v2:192.168.122.100:6814/3518175939,v1:192.168.122.100:6815/3518175939] up:active
Dec 13 07:15:01 compute-0 ceph-mon[74928]: fsmap cephfs:1 {0=cephfs.compute-0.zwnyoz=up:active}
Dec 13 07:15:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:15:01 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 13 07:15:01 compute-0 ceph-mon[74928]: osdmap e38: 3 total, 3 up, 3 in
Dec 13 07:15:01 compute-0 podman[95149]: 2025-12-13 07:15:01.23786545 +0000 UTC m=+0.384871431 container remove 968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:15:01 compute-0 systemd[1]: libpod-conmon-968d2d5ad450dd8b3da2fb3bc46181b5b735fcee29b10ad271709823a14f2ec3.scope: Deactivated successfully.
Dec 13 07:15:01 compute-0 sudo[95064]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:01 compute-0 sudo[95206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfrfzecwlchqlmzdaathhwrovyifxalb ; /usr/bin/python3'
Dec 13 07:15:01 compute-0 sudo[95206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:01 compute-0 sudo[95205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:15:01 compute-0 sudo[95205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:01 compute-0 sudo[95205]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:01 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 13 07:15:01 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 13 07:15:01 compute-0 sudo[95233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:15:01 compute-0 sudo[95233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:01 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 13 07:15:01 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 13 07:15:01 compute-0 python3[95225]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2546699b44423471dabb82bda6c93d714d0739a991974b6cefa963ee20ec328f-merged.mount: Deactivated successfully.
Dec 13 07:15:01 compute-0 podman[95258]: 2025-12-13 07:15:01.459113879 +0000 UTC m=+0.030331860 container create a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:15:01 compute-0 systemd[1]: Started libpod-conmon-a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191.scope.
Dec 13 07:15:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b5df26e9aa3b239147935e011713e28d91f577546bb955c8dcae275beab691/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b5df26e9aa3b239147935e011713e28d91f577546bb955c8dcae275beab691/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:01 compute-0 podman[95258]: 2025-12-13 07:15:01.513579468 +0000 UTC m=+0.084797459 container init a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 07:15:01 compute-0 podman[95258]: 2025-12-13 07:15:01.519220521 +0000 UTC m=+0.090438492 container start a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:15:01 compute-0 podman[95258]: 2025-12-13 07:15:01.520575928 +0000 UTC m=+0.091793899 container attach a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:01 compute-0 podman[95258]: 2025-12-13 07:15:01.44871206 +0000 UTC m=+0.019930050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:15:01 compute-0 podman[95284]: 2025-12-13 07:15:01.581692387 +0000 UTC m=+0.025996892 container create 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:15:01 compute-0 systemd[1]: Started libpod-conmon-1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a.scope.
Dec 13 07:15:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:01 compute-0 podman[95284]: 2025-12-13 07:15:01.630201952 +0000 UTC m=+0.074506467 container init 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:15:01 compute-0 podman[95284]: 2025-12-13 07:15:01.634195257 +0000 UTC m=+0.078499753 container start 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 07:15:01 compute-0 suspicious_hugle[95310]: 167 167
Dec 13 07:15:01 compute-0 podman[95284]: 2025-12-13 07:15:01.636932272 +0000 UTC m=+0.081236767 container attach 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 07:15:01 compute-0 systemd[1]: libpod-1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a.scope: Deactivated successfully.
Dec 13 07:15:01 compute-0 podman[95284]: 2025-12-13 07:15:01.637755068 +0000 UTC m=+0.082059563 container died 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6708cd94a186475e5538ed68b23fb912e3cc149ae5c8a3762a148e32a29233-merged.mount: Deactivated successfully.
Dec 13 07:15:01 compute-0 podman[95284]: 2025-12-13 07:15:01.656791517 +0000 UTC m=+0.101096012 container remove 1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hugle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:15:01 compute-0 podman[95284]: 2025-12-13 07:15:01.571391627 +0000 UTC m=+0.015696143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:01 compute-0 systemd[1]: libpod-conmon-1cdad2d7fbd7f8daf7b1d3e3c6f1961679d704b8c9ee6930fdcc10447ead451a.scope: Deactivated successfully.
Dec 13 07:15:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 13 07:15:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 13 07:15:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 13 07:15:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 13 07:15:01 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 13 07:15:01 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 39 pg[10.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:01 compute-0 podman[95337]: 2025-12-13 07:15:01.768772988 +0000 UTC m=+0.026499006 container create 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:15:01 compute-0 systemd[1]: Started libpod-conmon-495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe.scope.
Dec 13 07:15:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:01 compute-0 podman[95337]: 2025-12-13 07:15:01.820115246 +0000 UTC m=+0.077841264 container init 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:01 compute-0 podman[95337]: 2025-12-13 07:15:01.824578185 +0000 UTC m=+0.082304202 container start 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:15:01 compute-0 podman[95337]: 2025-12-13 07:15:01.830930484 +0000 UTC m=+0.088656502 container attach 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:15:01 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:15:01 compute-0 naughty_shannon[95270]: 
Dec 13 07:15:01 compute-0 naughty_shannon[95270]: [{"container_id": "8e6a4f61ea03", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.21%", "created": "2025-12-13T07:13:55.466128Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-13T07:13:55.505998Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309109Z", "memory_usage": 7803502, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-13T07:13:55.400266Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@crash.compute-0", "version": "20.2.0"}, {"container_id": "c65ab07d188f", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "5.68%", "created": "2025-12-13T07:14:58.166394Z", "daemon_id": "cephfs.compute-0.zwnyoz", "daemon_name": "mds.cephfs.compute-0.zwnyoz", "daemon_type": "mds", "events": ["2025-12-13T07:14:58.207332Z daemon:mds.cephfs.compute-0.zwnyoz [INFO] \"Deployed mds.cephfs.compute-0.zwnyoz on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309475Z", "memory_usage": 12645826, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-13T07:14:58.094347Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mds.cephfs.compute-0.zwnyoz", "version": "20.2.0"}, {"container_id": "4d78867918d5", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "18.35%", "created": "2025-12-13T07:13:23.817984Z", "daemon_id": "compute-0.qsherl", "daemon_name": "mgr.compute-0.qsherl", "daemon_type": "mgr", "events": ["2025-12-13T07:13:58.693619Z daemon:mgr.compute-0.qsherl [INFO] \"Reconfigured mgr.compute-0.qsherl on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309038Z", "memory_usage": 546727526, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-13T07:13:23.760333Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mgr.compute-0.qsherl", "version": "20.2.0"}, {"container_id": "4656a144eefb", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.12%", "created": "2025-12-13T07:13:21.270972Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-13T07:13:58.201111Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.308939Z", "memory_request": 2147483648, "memory_usage": 40160460, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-13T07:13:22.585200Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@mon.compute-0", "version": "20.2.0"}, {"container_id": "5e169e1385f9", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.28%", "created": "2025-12-13T07:14:12.157205Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-13T07:14:12.197502Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309175Z", "memory_request": 4294967296, "memory_usage": 68105011, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:14:12.086798Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@osd.0", "version": "20.2.0"}, {"container_id": "c0e0c03f97b0", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.38%", "created": "2025-12-13T07:14:15.231219Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-13T07:14:15.322829Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309240Z", "memory_request": 4294967296, "memory_usage": 68105011, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:14:15.052351Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@osd.1", "version": "20.2.0"}, {"container_id": "bb7cd2f636f6", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.47%", "created": "2025-12-13T07:14:18.155232Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-13T07:14:18.245140Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309305Z", "memory_request": 4294967296, "memory_usage": 69688360, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:14:17.988018Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@osd.2", "version": "20.2.0"}, {"container_id": "69ac193e949f", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "5.17%", "created": "2025-12-13T07:14:56.869389Z", "daemon_id": "rgw.compute-0.kikquh", "daemon_name": "rgw.rgw.compute-0.kikquh", "daemon_type": "rgw", "events": ["2025-12-13T07:14:56.914374Z daemon:rgw.rgw.compute-0.kikquh [INFO] \"Deployed rgw.rgw.compute-0.kikquh on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-12-13T07:14:59.309370Z", "memory_usage": 54578380, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-12-13T07:14:56.803998Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de@rgw.rgw.compute-0.kikquh", "version": "20.2.0"}]
Dec 13 07:15:01 compute-0 podman[95337]: 2025-12-13 07:15:01.757767953 +0000 UTC m=+0.015493991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:01 compute-0 systemd[1]: libpod-a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191.scope: Deactivated successfully.
Dec 13 07:15:01 compute-0 podman[95258]: 2025-12-13 07:15:01.858482728 +0000 UTC m=+0.429700699 container died a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:01 compute-0 podman[95258]: 2025-12-13 07:15:01.877781061 +0000 UTC m=+0.448999031 container remove a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191 (image=quay.io/ceph/ceph:v20, name=naughty_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 13 07:15:01 compute-0 sudo[95206]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:01 compute-0 systemd[1]: libpod-conmon-a5268b65ec7bf37d287a2ea2bfe54a05288ea687e5831d77e89536be0cc35191.scope: Deactivated successfully.
Dec 13 07:15:02 compute-0 ansible-async_wrapper.py[93568]: Done in kid B.
Dec 13 07:15:02 compute-0 rsyslogd[962]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "8e6a4f61ea03", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 13 07:15:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v72: 196 pgs: 1 unknown, 1 creating+peering, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 6 op/s
Dec 13 07:15:02 compute-0 ceph-mon[74928]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:15:02 compute-0 ceph-mon[74928]: 5.10 scrub starts
Dec 13 07:15:02 compute-0 ceph-mon[74928]: 5.10 scrub ok
Dec 13 07:15:02 compute-0 ceph-mon[74928]: 4.17 scrub starts
Dec 13 07:15:02 compute-0 ceph-mon[74928]: 4.17 scrub ok
Dec 13 07:15:02 compute-0 ceph-mon[74928]: osdmap e39: 3 total, 3 up, 3 in
Dec 13 07:15:02 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 13 07:15:02 compute-0 lvm[95441]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:15:02 compute-0 lvm[95440]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:15:02 compute-0 lvm[95440]: VG ceph_vg0 finished
Dec 13 07:15:02 compute-0 lvm[95441]: VG ceph_vg1 finished
Dec 13 07:15:02 compute-0 lvm[95444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:15:02 compute-0 lvm[95444]: VG ceph_vg2 finished
Dec 13 07:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1b5df26e9aa3b239147935e011713e28d91f577546bb955c8dcae275beab691-merged.mount: Deactivated successfully.
Dec 13 07:15:02 compute-0 condescending_albattani[95351]: {}
Dec 13 07:15:02 compute-0 systemd[1]: libpod-495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe.scope: Deactivated successfully.
Dec 13 07:15:02 compute-0 systemd[1]: libpod-495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe.scope: Consumed 1.007s CPU time.
Dec 13 07:15:02 compute-0 podman[95337]: 2025-12-13 07:15:02.479921771 +0000 UTC m=+0.737647809 container died 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e474fa75a8a94b5f75cea44792aabd3f4465a74671c8d1f0fece73a598193ba4-merged.mount: Deactivated successfully.
Dec 13 07:15:02 compute-0 podman[95337]: 2025-12-13 07:15:02.502679224 +0000 UTC m=+0.760405243 container remove 495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:02 compute-0 systemd[1]: libpod-conmon-495fad931ae444e6135750b1615a413428edee74a92468cb3b9bdef2003a0cbe.scope: Deactivated successfully.
Dec 13 07:15:02 compute-0 sudo[95233]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:15:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:15:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:02 compute-0 sudo[95480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryadruuqezlxcnfcoxmavlrbemdwmkms ; /usr/bin/python3'
Dec 13 07:15:02 compute-0 sudo[95480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:02 compute-0 sudo[95481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:15:02 compute-0 sudo[95481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:02 compute-0 sudo[95481]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:15:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:15:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:02 compute-0 sudo[95508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:15:02 compute-0 sudo[95508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:02 compute-0 sudo[95508]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:02 compute-0 sudo[95533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:15:02 compute-0 python3[95494]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:02 compute-0 sudo[95533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 13 07:15:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 13 07:15:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 13 07:15:02 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 13 07:15:02 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 40 pg[10.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:02 compute-0 podman[95557]: 2025-12-13 07:15:02.751809558 +0000 UTC m=+0.052679632 container create 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 07:15:02 compute-0 systemd[1]: Started libpod-conmon-1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c.scope.
Dec 13 07:15:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d069fced84f6c1079ac6088d4302a6c53f633f3e354948f543bfdd122f73bbf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d069fced84f6c1079ac6088d4302a6c53f633f3e354948f543bfdd122f73bbf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:02 compute-0 podman[95557]: 2025-12-13 07:15:02.809222225 +0000 UTC m=+0.110092300 container init 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 07:15:02 compute-0 podman[95557]: 2025-12-13 07:15:02.720344018 +0000 UTC m=+0.021214092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:15:02 compute-0 podman[95557]: 2025-12-13 07:15:02.814585185 +0000 UTC m=+0.115455259 container start 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:15:02 compute-0 podman[95557]: 2025-12-13 07:15:02.823495642 +0000 UTC m=+0.124365736 container attach 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:15:03 compute-0 podman[95631]: 2025-12-13 07:15:03.04263049 +0000 UTC m=+0.037878725 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:15:03 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 13 07:15:03 compute-0 podman[95631]: 2025-12-13 07:15:03.124852036 +0000 UTC m=+0.120100271 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 07:15:03 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572883331' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 07:15:03 compute-0 busy_johnson[95571]: 
Dec 13 07:15:03 compute-0 busy_johnson[95571]: {"fsid":"00fdae1b-7fad-5f1b-8734-ba4d9298a6de","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":100,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":3,"osd_up_since":1765610062,"num_in_osds":3,"osd_in_since":1765610047,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":196,"num_pools":10,"num_objects":13,"data_bytes":461030,"bytes_used":84062208,"bytes_avail":64327864320,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051020407117903233,"inactive_pgs_ratio":0.0051020407117903233,"read_bytes_sec":1279,"write_bytes_sec":2047,"read_op_per_sec":0,"write_op_per_sec":5},"fsmap":{"epoch":5,"btime":"2025-12-13T07:15:00:227157+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.zwnyoz","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T07:14:40.194918+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 13 07:15:03 compute-0 systemd[1]: libpod-1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c.scope: Deactivated successfully.
Dec 13 07:15:03 compute-0 podman[95557]: 2025-12-13 07:15:03.216188855 +0000 UTC m=+0.517058949 container died 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 07:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d069fced84f6c1079ac6088d4302a6c53f633f3e354948f543bfdd122f73bbf-merged.mount: Deactivated successfully.
Dec 13 07:15:03 compute-0 podman[95557]: 2025-12-13 07:15:03.244457316 +0000 UTC m=+0.545327391 container remove 1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c (image=quay.io/ceph/ceph:v20, name=busy_johnson, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:03 compute-0 systemd[1]: libpod-conmon-1c75db0d9cd6774bc46fdd886fb79c6e4ce21c2c6a6dcf1d50f39727e4926d1c.scope: Deactivated successfully.
Dec 13 07:15:03 compute-0 sudo[95480]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:03 compute-0 ceph-mon[74928]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 07:15:03 compute-0 ceph-mon[74928]: pgmap v72: 196 pgs: 1 unknown, 1 creating+peering, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 6 op/s
Dec 13 07:15:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:03 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 13 07:15:03 compute-0 ceph-mon[74928]: osdmap e40: 3 total, 3 up, 3 in
Dec 13 07:15:03 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2572883331' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 07:15:03 compute-0 sudo[95533]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:15:03 compute-0 sudo[95790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:15:03 compute-0 sudo[95790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:03 compute-0 sudo[95790]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 13 07:15:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 13 07:15:03 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 13 07:15:03 compute-0 sudo[95815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:15:03 compute-0 sudo[95815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:03 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 41 pg[11.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:03 compute-0 sudo[95863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnthwmhweaoqkdikqljpqxbuuynovdyg ; /usr/bin/python3'
Dec 13 07:15:03 compute-0 sudo[95863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:03 compute-0 podman[95876]: 2025-12-13 07:15:03.964759142 +0000 UTC m=+0.028617199 container create eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:15:03 compute-0 systemd[1]: Started libpod-conmon-eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5.scope.
Dec 13 07:15:03 compute-0 python3[95865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:04 compute-0 podman[95876]: 2025-12-13 07:15:04.022938572 +0000 UTC m=+0.086796648 container init eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 07:15:04 compute-0 podman[95876]: 2025-12-13 07:15:04.02788602 +0000 UTC m=+0.091744086 container start eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 07:15:04 compute-0 podman[95876]: 2025-12-13 07:15:04.02890732 +0000 UTC m=+0.092765386 container attach eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:15:04 compute-0 strange_newton[95889]: 167 167
Dec 13 07:15:04 compute-0 systemd[1]: libpod-eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5.scope: Deactivated successfully.
Dec 13 07:15:04 compute-0 podman[95876]: 2025-12-13 07:15:04.030702203 +0000 UTC m=+0.094560270 container died eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 07:15:04 compute-0 podman[95892]: 2025-12-13 07:15:04.038157666 +0000 UTC m=+0.030654917 container create 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aefdd941f55b5a940c9715f5bbe1b1893aa7581f7fe0c4e32163b40831174da-merged.mount: Deactivated successfully.
Dec 13 07:15:04 compute-0 podman[95876]: 2025-12-13 07:15:03.952648721 +0000 UTC m=+0.016506807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:04 compute-0 podman[95876]: 2025-12-13 07:15:04.055718271 +0000 UTC m=+0.119576337 container remove eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_newton, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:15:04 compute-0 systemd[1]: Started libpod-conmon-2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685.scope.
Dec 13 07:15:04 compute-0 systemd[1]: libpod-conmon-eacf135a8b57c2ee01f3c02f54d01f57cfd036b1dfd9680ece654170d75766f5.scope: Deactivated successfully.
Dec 13 07:15:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ce6819b805382e51e1a277b98a340dce2bcd70050c824108be109229b7686/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ce6819b805382e51e1a277b98a340dce2bcd70050c824108be109229b7686/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:04 compute-0 podman[95892]: 2025-12-13 07:15:04.093228121 +0000 UTC m=+0.085725382 container init 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:15:04 compute-0 podman[95892]: 2025-12-13 07:15:04.097241755 +0000 UTC m=+0.089739007 container start 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 07:15:04 compute-0 podman[95892]: 2025-12-13 07:15:04.098479362 +0000 UTC m=+0.090976613 container attach 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:04 compute-0 podman[95892]: 2025-12-13 07:15:04.025796533 +0000 UTC m=+0.018293804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:15:04 compute-0 podman[95926]: 2025-12-13 07:15:04.178857524 +0000 UTC m=+0.029008783 container create b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:15:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v75: 197 pgs: 1 unknown, 1 creating+peering, 195 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 13 07:15:04 compute-0 systemd[1]: Started libpod-conmon-b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a.scope.
Dec 13 07:15:04 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mds-cephfs-compute-0-zwnyoz[93860]: 2025-12-13T07:15:04.220+0000 7fe604a7f640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 13 07:15:04 compute-0 ceph-mds[93864]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 13 07:15:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:04 compute-0 podman[95926]: 2025-12-13 07:15:04.255499753 +0000 UTC m=+0.105651013 container init b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:15:04 compute-0 podman[95926]: 2025-12-13 07:15:04.260626269 +0000 UTC m=+0.110777529 container start b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:15:04 compute-0 podman[95926]: 2025-12-13 07:15:04.261888321 +0000 UTC m=+0.112039581 container attach b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:15:04 compute-0 podman[95926]: 2025-12-13 07:15:04.166627457 +0000 UTC m=+0.016778737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 07:15:04 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886617627' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:15:04 compute-0 frosty_mendeleev[95915]: 
Dec 13 07:15:04 compute-0 systemd[1]: libpod-2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685.scope: Deactivated successfully.
Dec 13 07:15:04 compute-0 conmon[95915]: conmon 2738ed6eece7fcff7791 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685.scope/container/memory.events
Dec 13 07:15:04 compute-0 frosty_mendeleev[95915]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.kikquh","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 13 07:15:04 compute-0 podman[95892]: 2025-12-13 07:15:04.42728752 +0000 UTC m=+0.419784771 container died 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc8ce6819b805382e51e1a277b98a340dce2bcd70050c824108be109229b7686-merged.mount: Deactivated successfully.
Dec 13 07:15:04 compute-0 podman[95892]: 2025-12-13 07:15:04.45974833 +0000 UTC m=+0.452245582 container remove 2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685 (image=quay.io/ceph/ceph:v20, name=frosty_mendeleev, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 13 07:15:04 compute-0 systemd[1]: libpod-conmon-2738ed6eece7fcff7791bba57d43b3e1eedbc0a0fa35f3859cd1105ba3c0c685.scope: Deactivated successfully.
Dec 13 07:15:04 compute-0 sudo[95863]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:04 compute-0 beautiful_euclid[95958]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:15:04 compute-0 beautiful_euclid[95958]: --> All data devices are unavailable
Dec 13 07:15:04 compute-0 systemd[1]: libpod-b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a.scope: Deactivated successfully.
Dec 13 07:15:04 compute-0 podman[95926]: 2025-12-13 07:15:04.633329911 +0000 UTC m=+0.483481170 container died b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:15:04 compute-0 ceph-mon[74928]: 7.12 scrub starts
Dec 13 07:15:04 compute-0 ceph-mon[74928]: 7.12 scrub ok
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:15:04 compute-0 ceph-mon[74928]: osdmap e41: 3 total, 3 up, 3 in
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 13 07:15:04 compute-0 ceph-mon[74928]: pgmap v75: 197 pgs: 1 unknown, 1 creating+peering, 195 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 13 07:15:04 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3886617627' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 07:15:04 compute-0 podman[95926]: 2025-12-13 07:15:04.652070073 +0000 UTC m=+0.502221333 container remove b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:15:04 compute-0 systemd[1]: libpod-conmon-b4de93d0ecb6ab5f9aba5ed10ade272a03d3a85052deb5266728c21c68f0833a.scope: Deactivated successfully.
Dec 13 07:15:04 compute-0 sudo[95815]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:04 compute-0 sudo[96000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:15:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 13 07:15:04 compute-0 sudo[96000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 13 07:15:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 13 07:15:04 compute-0 sudo[96000]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:04 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 13 07:15:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 13 07:15:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 13 07:15:04 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 42 pg[11.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:04 compute-0 sudo[96025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:15:04 compute-0 sudo[96025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0905c8791ea007d786111054a7d97e3a47936d384a94cc447123b3d51ad6528d-merged.mount: Deactivated successfully.
Dec 13 07:15:04 compute-0 podman[96060]: 2025-12-13 07:15:04.986550553 +0000 UTC m=+0.025474029 container create 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:05 compute-0 systemd[1]: Started libpod-conmon-3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71.scope.
Dec 13 07:15:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:05 compute-0 podman[96060]: 2025-12-13 07:15:05.02654849 +0000 UTC m=+0.065471987 container init 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:15:05 compute-0 podman[96060]: 2025-12-13 07:15:05.030557515 +0000 UTC m=+0.069480992 container start 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 07:15:05 compute-0 podman[96060]: 2025-12-13 07:15:05.031621404 +0000 UTC m=+0.070544901 container attach 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:05 compute-0 fervent_carver[96073]: 167 167
Dec 13 07:15:05 compute-0 systemd[1]: libpod-3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71.scope: Deactivated successfully.
Dec 13 07:15:05 compute-0 podman[96060]: 2025-12-13 07:15:05.033680444 +0000 UTC m=+0.072603931 container died 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e686fe38216b82c70785c5db1703dae835acef3fb64c20aa8a238a63a95b712e-merged.mount: Deactivated successfully.
Dec 13 07:15:05 compute-0 podman[96060]: 2025-12-13 07:15:05.055111534 +0000 UTC m=+0.094035011 container remove 3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carver, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:05 compute-0 podman[96060]: 2025-12-13 07:15:04.976760412 +0000 UTC m=+0.015683909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:05 compute-0 systemd[1]: libpod-conmon-3319bbd468d7b54d327c59627e42872d255ba41a62d3c46a5dcefe1aebaaca71.scope: Deactivated successfully.
Dec 13 07:15:05 compute-0 sudo[96112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlbxtcrmaegelrqnscsnuwkkbposexaj ; /usr/bin/python3'
Dec 13 07:15:05 compute-0 sudo[96112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:05 compute-0 podman[96120]: 2025-12-13 07:15:05.16799482 +0000 UTC m=+0.027879270 container create 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:15:05 compute-0 systemd[1]: Started libpod-conmon-31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77.scope.
Dec 13 07:15:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:05 compute-0 python3[96114]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:05 compute-0 podman[96120]: 2025-12-13 07:15:05.224938546 +0000 UTC m=+0.084822996 container init 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:05 compute-0 podman[96120]: 2025-12-13 07:15:05.230485732 +0000 UTC m=+0.090370182 container start 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:15:05 compute-0 podman[96120]: 2025-12-13 07:15:05.232477004 +0000 UTC m=+0.092361475 container attach 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:05 compute-0 podman[96136]: 2025-12-13 07:15:05.251755189 +0000 UTC m=+0.025031779 container create 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:15:05 compute-0 podman[96120]: 2025-12-13 07:15:05.156984746 +0000 UTC m=+0.016869216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:05 compute-0 systemd[1]: Started libpod-conmon-79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e.scope.
Dec 13 07:15:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900766f18bd2f523432f604cd0e3f23922c067488872e1cf847bd6d6f8bd8479/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/900766f18bd2f523432f604cd0e3f23922c067488872e1cf847bd6d6f8bd8479/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:05 compute-0 podman[96136]: 2025-12-13 07:15:05.294451166 +0000 UTC m=+0.067727756 container init 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:15:05 compute-0 podman[96136]: 2025-12-13 07:15:05.298801742 +0000 UTC m=+0.072078323 container start 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:15:05 compute-0 podman[96136]: 2025-12-13 07:15:05.299953728 +0000 UTC m=+0.073230319 container attach 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:15:05 compute-0 podman[96136]: 2025-12-13 07:15:05.242206982 +0000 UTC m=+0.015483592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:15:05 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 13 07:15:05 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 13 07:15:05 compute-0 reverent_knuth[96133]: {
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:     "0": [
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:         {
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "devices": [
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "/dev/loop3"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             ],
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_name": "ceph_lv0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_size": "21470642176",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "name": "ceph_lv0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "tags": {
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cluster_name": "ceph",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.crush_device_class": "",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.encrypted": "0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.objectstore": "bluestore",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osd_id": "0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.type": "block",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.vdo": "0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.with_tpm": "0"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             },
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "type": "block",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "vg_name": "ceph_vg0"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:         }
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:     ],
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:     "1": [
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:         {
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "devices": [
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "/dev/loop4"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             ],
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_name": "ceph_lv1",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_size": "21470642176",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "name": "ceph_lv1",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "tags": {
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cluster_name": "ceph",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.crush_device_class": "",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.encrypted": "0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.objectstore": "bluestore",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osd_id": "1",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.type": "block",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.vdo": "0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.with_tpm": "0"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             },
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "type": "block",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "vg_name": "ceph_vg1"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:         }
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:     ],
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:     "2": [
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:         {
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "devices": [
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "/dev/loop5"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             ],
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_name": "ceph_lv2",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_size": "21470642176",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "name": "ceph_lv2",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "tags": {
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.cluster_name": "ceph",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.crush_device_class": "",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.encrypted": "0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.objectstore": "bluestore",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osd_id": "2",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.type": "block",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.vdo": "0",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:                 "ceph.with_tpm": "0"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             },
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "type": "block",
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:             "vg_name": "ceph_vg2"
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:         }
Dec 13 07:15:05 compute-0 reverent_knuth[96133]:     ]
Dec 13 07:15:05 compute-0 reverent_knuth[96133]: }
Dec 13 07:15:05 compute-0 systemd[1]: libpod-31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77.scope: Deactivated successfully.
Dec 13 07:15:05 compute-0 podman[96120]: 2025-12-13 07:15:05.482132656 +0000 UTC m=+0.342017107 container died 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:05 compute-0 podman[96120]: 2025-12-13 07:15:05.500350096 +0000 UTC m=+0.360234536 container remove 31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_knuth, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:15:05 compute-0 sudo[96025]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:05 compute-0 systemd[1]: libpod-conmon-31d05ac262c499f24bd2bf32bfccd96f11a33c0bba67502632d10bbc87ccdd77.scope: Deactivated successfully.
Dec 13 07:15:05 compute-0 sudo[96187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:15:05 compute-0 sudo[96187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:05 compute-0 sudo[96187]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 13 07:15:05 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439360568' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 13 07:15:05 compute-0 tender_ptolemy[96150]: mimic
Dec 13 07:15:05 compute-0 sudo[96212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:15:05 compute-0 sudo[96212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:05 compute-0 systemd[1]: libpod-79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e.scope: Deactivated successfully.
Dec 13 07:15:05 compute-0 podman[96136]: 2025-12-13 07:15:05.636176513 +0000 UTC m=+0.409453113 container died 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 07:15:05 compute-0 podman[96136]: 2025-12-13 07:15:05.657128422 +0000 UTC m=+0.430405012 container remove 79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:05 compute-0 systemd[1]: libpod-conmon-79648eee50b5e3f79a91ec962890475807809921981eb35d629fa4b775bc124e.scope: Deactivated successfully.
Dec 13 07:15:05 compute-0 sudo[96112]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 13 07:15:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 13 07:15:05 compute-0 ceph-mon[74928]: osdmap e42: 3 total, 3 up, 3 in
Dec 13 07:15:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 13 07:15:05 compute-0 ceph-mon[74928]: 4.16 scrub starts
Dec 13 07:15:05 compute-0 ceph-mon[74928]: 4.16 scrub ok
Dec 13 07:15:05 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3439360568' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 13 07:15:05 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 13 07:15:05 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 13 07:15:05 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 13 07:15:05 compute-0 radosgw[93487]: v1 topic migration: starting v1 topic migration..
Dec 13 07:15:05 compute-0 radosgw[93487]: v1 topic migration: finished v1 topic migration
Dec 13 07:15:05 compute-0 radosgw[93487]: framework: beast
Dec 13 07:15:05 compute-0 radosgw[93487]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 13 07:15:05 compute-0 radosgw[93487]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 13 07:15:05 compute-0 radosgw[93487]: starting handler: beast
Dec 13 07:15:05 compute-0 radosgw[93487]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 07:15:05 compute-0 podman[96292]: 2025-12-13 07:15:05.884848646 +0000 UTC m=+0.029815570 container create fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:15:05 compute-0 radosgw[93487]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.kikquh,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865356,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=c41c06c0-96f4-44f4-8e75-f5ee0f887dbd,zone_name=default,zonegroup_id=3619564b-3f09-447d-be0a-4c55dcbaaf7a,zonegroup_name=default}
Dec 13 07:15:05 compute-0 systemd[1]: Started libpod-conmon-fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1.scope.
Dec 13 07:15:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:05 compute-0 podman[96292]: 2025-12-13 07:15:05.938833931 +0000 UTC m=+0.083800846 container init fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:05 compute-0 podman[96292]: 2025-12-13 07:15:05.943744251 +0000 UTC m=+0.088711165 container start fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:15:05 compute-0 podman[96292]: 2025-12-13 07:15:05.94548906 +0000 UTC m=+0.090455994 container attach fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:15:05 compute-0 condescending_rosalind[96307]: 167 167
Dec 13 07:15:05 compute-0 systemd[1]: libpod-fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1.scope: Deactivated successfully.
Dec 13 07:15:05 compute-0 podman[96292]: 2025-12-13 07:15:05.947391787 +0000 UTC m=+0.092358701 container died fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:15:05 compute-0 podman[96292]: 2025-12-13 07:15:05.964180291 +0000 UTC m=+0.109147205 container remove fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:15:05 compute-0 podman[96292]: 2025-12-13 07:15:05.871913004 +0000 UTC m=+0.016879937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-900766f18bd2f523432f604cd0e3f23922c067488872e1cf847bd6d6f8bd8479-merged.mount: Deactivated successfully.
Dec 13 07:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7190b0a9c2d5403aff95da9f6688be354bcadce99ba4e670afaf3422d4bc342-merged.mount: Deactivated successfully.
Dec 13 07:15:05 compute-0 systemd[1]: libpod-conmon-fe578b8c1d8aabe387c4b9c0919d85cab96ec4d22b207c2ed5f99aa524d3d9f1.scope: Deactivated successfully.
Dec 13 07:15:06 compute-0 podman[96329]: 2025-12-13 07:15:06.080929121 +0000 UTC m=+0.027185306 container create 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:15:06 compute-0 systemd[1]: Started libpod-conmon-3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d.scope.
Dec 13 07:15:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:06 compute-0 podman[96329]: 2025-12-13 07:15:06.147989232 +0000 UTC m=+0.094245416 container init 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:06 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 13 07:15:06 compute-0 podman[96329]: 2025-12-13 07:15:06.153314921 +0000 UTC m=+0.099571105 container start 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 07:15:06 compute-0 podman[96329]: 2025-12-13 07:15:06.154341392 +0000 UTC m=+0.100597575 container attach 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 07:15:06 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 13 07:15:06 compute-0 podman[96329]: 2025-12-13 07:15:06.070195187 +0000 UTC m=+0.016451391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:15:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v78: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 9 op/s
Dec 13 07:15:06 compute-0 sudo[96371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbatiatszbcondzbktcvziqdpsbpnkld ; /usr/bin/python3'
Dec 13 07:15:06 compute-0 sudo[96371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:06 compute-0 python3[96373]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:06 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 13 07:15:06 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 13 07:15:06 compute-0 podman[96387]: 2025-12-13 07:15:06.469578644 +0000 UTC m=+0.055579634 container create aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:06 compute-0 systemd[1]: Started libpod-conmon-aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04.scope.
Dec 13 07:15:06 compute-0 podman[96387]: 2025-12-13 07:15:06.444013533 +0000 UTC m=+0.030014543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:15:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0011bc1cb1c9b79bd71167ca41677cea2c9e2b93969dc7cef470a6f88b7e09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0011bc1cb1c9b79bd71167ca41677cea2c9e2b93969dc7cef470a6f88b7e09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:06 compute-0 podman[96387]: 2025-12-13 07:15:06.551657141 +0000 UTC m=+0.137658152 container init aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 07:15:06 compute-0 podman[96387]: 2025-12-13 07:15:06.556491737 +0000 UTC m=+0.142492728 container start aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 07:15:06 compute-0 podman[96387]: 2025-12-13 07:15:06.557528917 +0000 UTC m=+0.143529927 container attach aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:06 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1438695267' entity='client.rgw.rgw.compute-0.kikquh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 13 07:15:06 compute-0 ceph-mon[74928]: osdmap e43: 3 total, 3 up, 3 in
Dec 13 07:15:06 compute-0 ceph-mon[74928]: 3.14 scrub starts
Dec 13 07:15:06 compute-0 ceph-mon[74928]: 3.14 scrub ok
Dec 13 07:15:06 compute-0 ceph-mon[74928]: pgmap v78: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 9 op/s
Dec 13 07:15:06 compute-0 ceph-mon[74928]: 4.15 scrub starts
Dec 13 07:15:06 compute-0 ceph-mon[74928]: 4.15 scrub ok
Dec 13 07:15:06 compute-0 lvm[96482]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:15:06 compute-0 lvm[96482]: VG ceph_vg1 finished
Dec 13 07:15:06 compute-0 lvm[96481]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:15:06 compute-0 lvm[96481]: VG ceph_vg0 finished
Dec 13 07:15:06 compute-0 lvm[96485]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:15:06 compute-0 lvm[96485]: VG ceph_vg2 finished
Dec 13 07:15:06 compute-0 focused_jackson[96343]: {}
Dec 13 07:15:06 compute-0 systemd[1]: libpod-3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d.scope: Deactivated successfully.
Dec 13 07:15:06 compute-0 podman[96329]: 2025-12-13 07:15:06.780117395 +0000 UTC m=+0.726373579 container died 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-938aeea6205abe4969fd9a285121a40421e002a6826158c5445d46885c36da2d-merged.mount: Deactivated successfully.
Dec 13 07:15:06 compute-0 podman[96329]: 2025-12-13 07:15:06.802927538 +0000 UTC m=+0.749183721 container remove 3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:15:06 compute-0 systemd[1]: libpod-conmon-3d288f3e0a36780861944b5c72e10cbdc1fbcd0bddca4bf538c3bfe553db005d.scope: Deactivated successfully.
Dec 13 07:15:06 compute-0 sudo[96212]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:15:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:15:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:06 compute-0 sudo[96496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:15:06 compute-0 sudo[96496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:15:06 compute-0 sudo[96496]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 13 07:15:06 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763273209' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 13 07:15:06 compute-0 zealous_benz[96431]: 
Dec 13 07:15:06 compute-0 zealous_benz[96431]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Dec 13 07:15:06 compute-0 systemd[1]: libpod-aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04.scope: Deactivated successfully.
Dec 13 07:15:06 compute-0 podman[96387]: 2025-12-13 07:15:06.982469539 +0000 UTC m=+0.568470529 container died aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a0011bc1cb1c9b79bd71167ca41677cea2c9e2b93969dc7cef470a6f88b7e09-merged.mount: Deactivated successfully.
Dec 13 07:15:07 compute-0 podman[96387]: 2025-12-13 07:15:07.002423593 +0000 UTC m=+0.588424583 container remove aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04 (image=quay.io/ceph/ceph:v20, name=zealous_benz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:15:07 compute-0 systemd[1]: libpod-conmon-aa1848a2fb05384fcf987cc6606ea5f9c2521db6d7d3e25c376b3bb0b85c6f04.scope: Deactivated successfully.
Dec 13 07:15:07 compute-0 sudo[96371]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:07 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2763273209' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 13 07:15:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v79: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 186 B/s rd, 2.7 KiB/s wr, 4 op/s
Dec 13 07:15:08 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 13 07:15:08 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 13 07:15:08 compute-0 ceph-mon[74928]: pgmap v79: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 186 B/s rd, 2.7 KiB/s wr, 4 op/s
Dec 13 07:15:08 compute-0 ceph-mon[74928]: 2.14 scrub starts
Dec 13 07:15:08 compute-0 ceph-mon[74928]: 2.14 scrub ok
Dec 13 07:15:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:15:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:15:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:15:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:15:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:15:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:15:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v80: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 158 B/s rd, 316 B/s wr, 1 op/s
Dec 13 07:15:11 compute-0 ceph-mon[74928]: pgmap v80: 197 pgs: 197 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 158 B/s rd, 316 B/s wr, 1 op/s
Dec 13 07:15:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v81: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Dec 13 07:15:12 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 13 07:15:12 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 13 07:15:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:13 compute-0 ceph-mon[74928]: pgmap v81: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Dec 13 07:15:13 compute-0 ceph-mon[74928]: 2.12 scrub starts
Dec 13 07:15:13 compute-0 ceph-mon[74928]: 2.12 scrub ok
Dec 13 07:15:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v82: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 7.0 KiB/s wr, 150 op/s
Dec 13 07:15:14 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 13 07:15:14 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 13 07:15:15 compute-0 ceph-mon[74928]: pgmap v82: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 7.0 KiB/s wr, 150 op/s
Dec 13 07:15:15 compute-0 ceph-mon[74928]: 6.16 scrub starts
Dec 13 07:15:15 compute-0 ceph-mon[74928]: 6.16 scrub ok
Dec 13 07:15:15 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 13 07:15:15 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 13 07:15:16 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 13 07:15:16 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 13 07:15:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v83: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 6.1 KiB/s wr, 135 op/s
Dec 13 07:15:16 compute-0 ceph-mon[74928]: 2.10 scrub starts
Dec 13 07:15:16 compute-0 ceph-mon[74928]: 2.10 scrub ok
Dec 13 07:15:17 compute-0 ceph-mon[74928]: 7.10 scrub starts
Dec 13 07:15:17 compute-0 ceph-mon[74928]: 7.10 scrub ok
Dec 13 07:15:17 compute-0 ceph-mon[74928]: pgmap v83: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 6.1 KiB/s wr, 135 op/s
Dec 13 07:15:17 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 13 07:15:17 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 13 07:15:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:18 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 13 07:15:18 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 13 07:15:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v84: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 07:15:18 compute-0 ceph-mon[74928]: 5.17 scrub starts
Dec 13 07:15:18 compute-0 ceph-mon[74928]: 5.17 scrub ok
Dec 13 07:15:18 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 13 07:15:18 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 13 07:15:19 compute-0 ceph-mon[74928]: 3.13 scrub starts
Dec 13 07:15:19 compute-0 ceph-mon[74928]: 3.13 scrub ok
Dec 13 07:15:19 compute-0 ceph-mon[74928]: pgmap v84: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 07:15:19 compute-0 ceph-mon[74928]: 6.10 scrub starts
Dec 13 07:15:19 compute-0 ceph-mon[74928]: 6.10 scrub ok
Dec 13 07:15:19 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec 13 07:15:19 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec 13 07:15:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v85: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 07:15:20 compute-0 ceph-mon[74928]: 6.12 scrub starts
Dec 13 07:15:20 compute-0 ceph-mon[74928]: 6.12 scrub ok
Dec 13 07:15:20 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 13 07:15:20 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 13 07:15:21 compute-0 ceph-mon[74928]: pgmap v85: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 07:15:21 compute-0 ceph-mon[74928]: 4.c scrub starts
Dec 13 07:15:21 compute-0 ceph-mon[74928]: 4.c scrub ok
Dec 13 07:15:21 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 13 07:15:21 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 13 07:15:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v86: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 07:15:22 compute-0 ceph-mon[74928]: 4.0 scrub starts
Dec 13 07:15:22 compute-0 ceph-mon[74928]: 4.0 scrub ok
Dec 13 07:15:22 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 13 07:15:22 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 13 07:15:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:23 compute-0 ceph-mon[74928]: pgmap v86: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Dec 13 07:15:23 compute-0 ceph-mon[74928]: 5.8 scrub starts
Dec 13 07:15:23 compute-0 ceph-mon[74928]: 5.8 scrub ok
Dec 13 07:15:23 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 13 07:15:23 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 13 07:15:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v87: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:24 compute-0 ceph-mon[74928]: 6.0 scrub starts
Dec 13 07:15:24 compute-0 ceph-mon[74928]: 6.0 scrub ok
Dec 13 07:15:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 13 07:15:25 compute-0 ceph-mon[74928]: pgmap v87: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 13 07:15:26 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 13 07:15:26 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 13 07:15:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v88: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:26 compute-0 ceph-mon[74928]: 2.e scrub starts
Dec 13 07:15:26 compute-0 ceph-mon[74928]: 2.e scrub ok
Dec 13 07:15:27 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 13 07:15:27 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 13 07:15:27 compute-0 ceph-mon[74928]: 7.17 scrub starts
Dec 13 07:15:27 compute-0 ceph-mon[74928]: 7.17 scrub ok
Dec 13 07:15:27 compute-0 ceph-mon[74928]: pgmap v88: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v89: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:28 compute-0 ceph-mon[74928]: 7.16 scrub starts
Dec 13 07:15:28 compute-0 ceph-mon[74928]: 7.16 scrub ok
Dec 13 07:15:28 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 13 07:15:28 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 13 07:15:29 compute-0 ceph-mon[74928]: pgmap v89: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:29 compute-0 ceph-mon[74928]: 6.3 scrub starts
Dec 13 07:15:29 compute-0 ceph-mon[74928]: 6.3 scrub ok
Dec 13 07:15:29 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 13 07:15:29 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 13 07:15:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v90: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:30 compute-0 ceph-mon[74928]: 6.1b scrub starts
Dec 13 07:15:30 compute-0 ceph-mon[74928]: 6.1b scrub ok
Dec 13 07:15:30 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 13 07:15:30 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 13 07:15:31 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 13 07:15:31 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 13 07:15:31 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 13 07:15:31 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 13 07:15:31 compute-0 ceph-mon[74928]: pgmap v90: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:31 compute-0 ceph-mon[74928]: 4.19 scrub starts
Dec 13 07:15:31 compute-0 ceph-mon[74928]: 4.19 scrub ok
Dec 13 07:15:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v91: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:32 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 13 07:15:32 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 13 07:15:32 compute-0 ceph-mon[74928]: 3.10 scrub starts
Dec 13 07:15:32 compute-0 ceph-mon[74928]: 3.10 scrub ok
Dec 13 07:15:32 compute-0 ceph-mon[74928]: 5.a scrub starts
Dec 13 07:15:32 compute-0 ceph-mon[74928]: 5.a scrub ok
Dec 13 07:15:32 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 13 07:15:32 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 13 07:15:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:33 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 13 07:15:33 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 13 07:15:33 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 13 07:15:33 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 13 07:15:33 compute-0 ceph-mon[74928]: pgmap v91: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:33 compute-0 ceph-mon[74928]: 2.c scrub starts
Dec 13 07:15:33 compute-0 ceph-mon[74928]: 2.c scrub ok
Dec 13 07:15:33 compute-0 ceph-mon[74928]: 6.18 scrub starts
Dec 13 07:15:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v92: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:34 compute-0 ceph-mon[74928]: 6.18 scrub ok
Dec 13 07:15:34 compute-0 ceph-mon[74928]: 7.14 scrub starts
Dec 13 07:15:34 compute-0 ceph-mon[74928]: 7.14 scrub ok
Dec 13 07:15:34 compute-0 ceph-mon[74928]: 5.b scrub starts
Dec 13 07:15:34 compute-0 ceph-mon[74928]: 5.b scrub ok
Dec 13 07:15:35 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 13 07:15:35 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 13 07:15:35 compute-0 ceph-mon[74928]: pgmap v92: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v93: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:36 compute-0 ceph-mon[74928]: 2.0 scrub starts
Dec 13 07:15:36 compute-0 ceph-mon[74928]: 2.0 scrub ok
Dec 13 07:15:37 compute-0 ceph-mon[74928]: pgmap v93: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:37 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 13 07:15:37 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 13 07:15:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:15:38
Dec 13 07:15:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:15:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:15:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images']
Dec 13 07:15:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:15:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v94: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:38 compute-0 ceph-mon[74928]: 6.7 scrub starts
Dec 13 07:15:38 compute-0 ceph-mon[74928]: 6.7 scrub ok
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:15:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:15:39 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 13 07:15:39 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 13 07:15:39 compute-0 ceph-mon[74928]: pgmap v94: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v95: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:40 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 13 07:15:40 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 13 07:15:40 compute-0 ceph-mon[74928]: 7.b scrub starts
Dec 13 07:15:40 compute-0 ceph-mon[74928]: 7.b scrub ok
Dec 13 07:15:40 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 13 07:15:40 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 13 07:15:41 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 13 07:15:41 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 13 07:15:41 compute-0 ceph-mon[74928]: pgmap v95: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:41 compute-0 ceph-mon[74928]: 5.6 scrub starts
Dec 13 07:15:41 compute-0 ceph-mon[74928]: 5.6 scrub ok
Dec 13 07:15:41 compute-0 ceph-mon[74928]: 6.19 scrub starts
Dec 13 07:15:41 compute-0 ceph-mon[74928]: 6.19 scrub ok
Dec 13 07:15:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v96: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:42 compute-0 ceph-mon[74928]: 2.1 scrub starts
Dec 13 07:15:42 compute-0 ceph-mon[74928]: 2.1 scrub ok
Dec 13 07:15:42 compute-0 sudo[96555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uemidmssyqqxmlhhlczyprezygdkzxne ; /usr/bin/python3'
Dec 13 07:15:42 compute-0 sudo[96555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:42 compute-0 python3[96557]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:42 compute-0 podman[96558]: 2025-12-13 07:15:42.552617714 +0000 UTC m=+0.027368757 container create 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:42 compute-0 systemd[1]: Started libpod-conmon-1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f.scope.
Dec 13 07:15:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18e4cc1d91ff0b26569a29e00356b155b7f940e37a97f098740ef70bb3ab69e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18e4cc1d91ff0b26569a29e00356b155b7f940e37a97f098740ef70bb3ab69e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:42 compute-0 podman[96558]: 2025-12-13 07:15:42.606847105 +0000 UTC m=+0.081598157 container init 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 07:15:42 compute-0 podman[96558]: 2025-12-13 07:15:42.611776776 +0000 UTC m=+0.086527818 container start 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:42 compute-0 podman[96558]: 2025-12-13 07:15:42.613071211 +0000 UTC m=+0.087822253 container attach 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 07:15:42 compute-0 podman[96558]: 2025-12-13 07:15:42.542156523 +0000 UTC m=+0.016907566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:15:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:42 compute-0 unruffled_mendel[96570]: could not fetch user info: no user info saved
Dec 13 07:15:42 compute-0 systemd[1]: libpod-1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f.scope: Deactivated successfully.
Dec 13 07:15:42 compute-0 conmon[96570]: conmon 1c14586c1853758710ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f.scope/container/memory.events
Dec 13 07:15:42 compute-0 podman[96558]: 2025-12-13 07:15:42.715718657 +0000 UTC m=+0.190469699 container died 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c18e4cc1d91ff0b26569a29e00356b155b7f940e37a97f098740ef70bb3ab69e-merged.mount: Deactivated successfully.
Dec 13 07:15:42 compute-0 podman[96558]: 2025-12-13 07:15:42.732929789 +0000 UTC m=+0.207680830 container remove 1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f (image=quay.io/ceph/ceph:v20, name=unruffled_mendel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:15:42 compute-0 sudo[96555]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:42 compute-0 systemd[1]: libpod-conmon-1c14586c1853758710efe9d110425a7561d13c7c97c91fd77db0dd229a02110f.scope: Deactivated successfully.
Dec 13 07:15:42 compute-0 sudo[96689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idzjiosdabetkflzexnvjecikdkdtlok ; /usr/bin/python3'
Dec 13 07:15:42 compute-0 sudo[96689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:42 compute-0 python3[96691]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:43 compute-0 podman[96692]: 2025-12-13 07:15:43.006800788 +0000 UTC m=+0.029476394 container create 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:15:43 compute-0 systemd[1]: Started libpod-conmon-4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2.scope.
Dec 13 07:15:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa05481457bf40776ab14d0c022c296355100c1609d3f319dc2a011c6948befe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa05481457bf40776ab14d0c022c296355100c1609d3f319dc2a011c6948befe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:15:43 compute-0 podman[96692]: 2025-12-13 07:15:43.05142532 +0000 UTC m=+0.074100936 container init 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:15:43 compute-0 podman[96692]: 2025-12-13 07:15:43.055394453 +0000 UTC m=+0.078070059 container start 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:15:43 compute-0 podman[96692]: 2025-12-13 07:15:43.056447615 +0000 UTC m=+0.079123221 container attach 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:15:43 compute-0 podman[96692]: 2025-12-13 07:15:42.994223443 +0000 UTC m=+0.016899059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]: {
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "user_id": "openstack",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "display_name": "openstack",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "email": "",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "suspended": 0,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "max_buckets": 1000,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "subusers": [],
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "keys": [
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         {
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:             "user": "openstack",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:             "access_key": "NBMMPVNSN1MZ8JU3B7M3",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:             "secret_key": "CRUCrylGoRevblZZMdJcJtQm0MBioJUeDl2glffW",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:             "active": true,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:             "create_date": "2025-12-13T07:15:43.145899Z"
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         }
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     ],
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "swift_keys": [],
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "caps": [],
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "op_mask": "read, write, delete",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "default_placement": "",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "default_storage_class": "",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "placement_tags": [],
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "bucket_quota": {
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "enabled": false,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "check_on_raw": false,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "max_size": -1,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "max_size_kb": 0,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "max_objects": -1
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     },
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "user_quota": {
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "enabled": false,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "check_on_raw": false,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "max_size": -1,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "max_size_kb": 0,
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:         "max_objects": -1
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     },
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "temp_url_keys": [],
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "type": "rgw",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "mfa_ids": [],
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "account_id": "",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "path": "/",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "create_date": "2025-12-13T07:15:43.145708Z",
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "tags": [],
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]:     "group_ids": []
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]: }
Dec 13 07:15:43 compute-0 heuristic_lichterman[96704]: 
Dec 13 07:15:43 compute-0 systemd[1]: libpod-4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2.scope: Deactivated successfully.
Dec 13 07:15:43 compute-0 conmon[96704]: conmon 4216caef7ee7bafcf8d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2.scope/container/memory.events
Dec 13 07:15:43 compute-0 podman[96692]: 2025-12-13 07:15:43.163323369 +0000 UTC m=+0.185998975 container died 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa05481457bf40776ab14d0c022c296355100c1609d3f319dc2a011c6948befe-merged.mount: Deactivated successfully.
Dec 13 07:15:43 compute-0 podman[96692]: 2025-12-13 07:15:43.181107536 +0000 UTC m=+0.203783142 container remove 4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2 (image=quay.io/ceph/ceph:v20, name=heuristic_lichterman, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 07:15:43 compute-0 systemd[76210]: Starting Mark boot as successful...
Dec 13 07:15:43 compute-0 systemd[76210]: Finished Mark boot as successful.
Dec 13 07:15:43 compute-0 systemd[1]: libpod-conmon-4216caef7ee7bafcf8d389fa852b8712bcb450b15d77af16c07d3b885ed752d2.scope: Deactivated successfully.
Dec 13 07:15:43 compute-0 sudo[96689]: pam_unix(sudo:session): session closed for user root
Dec 13 07:15:43 compute-0 ceph-mon[74928]: pgmap v96: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:43 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 13 07:15:43 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v97: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:44 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 13 07:15:44 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 13 07:15:44 compute-0 ceph-mon[74928]: 4.6 scrub starts
Dec 13 07:15:44 compute-0 ceph-mon[74928]: 4.6 scrub ok
Dec 13 07:15:44 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 13 07:15:44 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.445826042658682e-07 of space, bias 4.0, pg target 0.0008934991251190418 quantized to 16 (current 32)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:15:44 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 07:15:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:15:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:15:45 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 13 07:15:45 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 13 07:15:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 13 07:15:45 compute-0 ceph-mon[74928]: pgmap v97: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:45 compute-0 ceph-mon[74928]: 3.d scrub starts
Dec 13 07:15:45 compute-0 ceph-mon[74928]: 3.d scrub ok
Dec 13 07:15:45 compute-0 ceph-mon[74928]: 4.b scrub starts
Dec 13 07:15:45 compute-0 ceph-mon[74928]: 4.b scrub ok
Dec 13 07:15:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:15:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:15:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 13 07:15:45 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 13 07:15:45 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 12420bfd-0b2b-436d-86b0-cbce48bccd77 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 13 07:15:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:15:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:15:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v99: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 28 op/s
Dec 13 07:15:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:15:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:15:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 13 07:15:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:15:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:15:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 13 07:15:46 compute-0 ceph-mon[74928]: 5.e scrub starts
Dec 13 07:15:46 compute-0 ceph-mon[74928]: 5.e scrub ok
Dec 13 07:15:46 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:15:46 compute-0 ceph-mon[74928]: osdmap e44: 3 total, 3 up, 3 in
Dec 13 07:15:46 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:15:46 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:15:46 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 45 pg[8.0( v 36'6 (0'0,36'6] local-lis/les=35/36 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=8.345492363s) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 36'5 mlcod 36'5 active pruub 99.056396484s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:46 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 13 07:15:46 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 7f57f5cd-035c-4a65-8d09-7296c792b467 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 13 07:15:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:15:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:15:46 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 45 pg[8.0( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=8.345492363s) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 36'5 mlcod 0'0 unknown pruub 99.056396484s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:46 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x560fe1347b00) split_cache   moving buffer(0x560fe0176e00 space 0x560fe0549440 0x0~424 clean)
Dec 13 07:15:46 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x560fe1347b00) split_cache   moving buffer(0x560fdec95f80 space 0x560fdfa54240 0x0~1b4 clean)
Dec 13 07:15:46 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x560fe1347b00) split_cache   moving buffer(0x560fe0181d00 space 0x560fe1852840 0x0~2e clean)
Dec 13 07:15:46 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x560fe1347b00) split_cache   moving buffer(0x560fe01fd900 space 0x560fe13b1740 0x0~2e clean)
Dec 13 07:15:47 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 13 07:15:47 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 13 07:15:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 13 07:15:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:15:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.13( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.19( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.5( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.7( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.8( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.3( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1( v 36'6 (0'0,36'6] local-lis/les=35/36 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.17( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.16( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:47 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev 9aa03648-618e-4794-8c1f-b13a271ecad7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 13 07:15:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 13 07:15:47 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.19( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.5( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.0( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 36'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.7( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.8( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.3( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.13( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.1( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.17( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.16( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 46 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=35/35 les/c/f=36/36/0 sis=45) [1] r=0 lpr=45 pi=[35,45)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:47 compute-0 ceph-mon[74928]: pgmap v99: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 28 op/s
Dec 13 07:15:47 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:15:47 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:15:47 compute-0 ceph-mon[74928]: osdmap e45: 3 total, 3 up, 3 in
Dec 13 07:15:47 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:15:47 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 13 07:15:47 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 13 07:15:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v102: 228 pgs: 31 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 46 op/s
Dec 13 07:15:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:15:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:15:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:15:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:15:48 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 13 07:15:48 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 13 07:15:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 13 07:15:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:15:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:15:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:15:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 13 07:15:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 47 pg[9.0( v 43'551 (0'0,43'551] local-lis/les=37/38 n=210 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=47 pruub=8.344427109s) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 43'550 mlcod 43'550 active pruub 101.061424255s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:48 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] update: starting ev adb4db96-1a14-4bf8-a24a-e6cee7f34630 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 12420bfd-0b2b-436d-86b0-cbce48bccd77 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 12420bfd-0b2b-436d-86b0-cbce48bccd77 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 7f57f5cd-035c-4a65-8d09-7296c792b467 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 7f57f5cd-035c-4a65-8d09-7296c792b467 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev 9aa03648-618e-4794-8c1f-b13a271ecad7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event 9aa03648-618e-4794-8c1f-b13a271ecad7 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] complete: finished ev adb4db96-1a14-4bf8-a24a-e6cee7f34630 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 13 07:15:48 compute-0 ceph-mgr[75200]: [progress INFO root] Completed event adb4db96-1a14-4bf8-a24a-e6cee7f34630 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 13 07:15:48 compute-0 ceph-mon[74928]: 5.d scrub starts
Dec 13 07:15:48 compute-0 ceph-mon[74928]: 5.d scrub ok
Dec 13 07:15:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:15:48 compute-0 ceph-mon[74928]: osdmap e46: 3 total, 3 up, 3 in
Dec 13 07:15:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 07:15:48 compute-0 ceph-mon[74928]: 6.9 scrub starts
Dec 13 07:15:48 compute-0 ceph-mon[74928]: 6.9 scrub ok
Dec 13 07:15:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:15:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:15:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 07:15:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:15:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:15:48 compute-0 ceph-mon[74928]: osdmap e47: 3 total, 3 up, 3 in
Dec 13 07:15:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 47 pg[9.0( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=47 pruub=8.344427109s) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 43'550 mlcod 0'0 unknown pruub 101.061424255s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aa480 space 0x560fe1afab40 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0181100 space 0x560fe1a47440 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aab80 space 0x560fe1a4d140 0x0~98 clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01f0b00 space 0x560fe1aa2840 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b480 space 0x560fe1aa4840 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00b8200 space 0x560fe1809140 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0193000 space 0x560fdf5d2b40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192580 space 0x560fe17f1a40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019bf80 space 0x560fe17ec540 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192980 space 0x560fe17f0840 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b500 space 0x560fe17f9740 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0180100 space 0x560fe18b1d40 0x0~1c clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fdde7c300 space 0x560fdf5d2240 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe012a300 space 0x560fe1a93140 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192780 space 0x560fe17f1140 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01f1600 space 0x560fe1a78b40 0x0~98 clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b300 space 0x560fe1800240 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b080 space 0x560fe1aa5740 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187600 space 0x560fe1a11740 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01f0180 space 0x560fe1801d40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192e00 space 0x560fdf5d3440 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187c00 space 0x560fe1803140 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b100 space 0x560fe1800b40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01fd580 space 0x560fe17b1740 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b700 space 0x560fe17f8e40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe016a080 space 0x560fe1a6a240 0x0~98 clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192c00 space 0x560fdf5d3d40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00ab800 space 0x560fe1aa5140 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00b8e00 space 0x560fdfa55a40 0x0~98 clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aa380 space 0x560fe1a93a40 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019ad80 space 0x560fe174ce40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b880 space 0x560fe1a92840 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019ab00 space 0x560fe1801440 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fdfea2700 space 0x560fe1acb140 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aaa80 space 0x560fe13b1440 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192380 space 0x560fe17bf140 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aa100 space 0x560fe1a47140 0x0~98 clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019bd00 space 0x560fe17ece40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019bb00 space 0x560fdf797440 0x0~98 clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187480 space 0x560fe1a4ce40 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00b8800 space 0x560fe1801140 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019bf00 space 0x560fe17ed740 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe019b900 space 0x560fe17f8540 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aad00 space 0x560fe1797a40 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe012aa00 space 0x560fe17b0e40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fdfe9b280 space 0x560fe1aa3140 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187880 space 0x560fe156c540 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00b8980 space 0x560fe1852540 0x0~98 clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe01f0800 space 0x560fe17b0540 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00aa200 space 0x560fdf797140 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187f80 space 0x560fe1a10540 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe00ab100 space 0x560fe179e240 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fdffe1d80 space 0x560fe1802840 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe016bc00 space 0x560fe1853740 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe016b680 space 0x560fe17bfa40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192500 space 0x560fe1796e40 0x0~9a clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187800 space 0x560fe1a10e40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0192180 space 0x560fe17be840 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0203e00 space 0x560fe1a5c840 0x0~98 clean)
Dec 13 07:15:48 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x560fe16efb00) split_cache   moving buffer(0x560fe0187d80 space 0x560fe1803a40 0x0~6e clean)
Dec 13 07:15:48 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 13 07:15:48 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 13 07:15:48 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 47 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=39/40 n=9 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 43'65 active pruub 100.238204956s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:48 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 47 pg[10.0( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 unknown pruub 100.238204956s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-mgr[75200]: [progress INFO root] Writing back 16 completed events
Dec 13 07:15:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 07:15:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:49 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 13 07:15:49 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 13 07:15:49 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 13 07:15:49 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 13 07:15:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 13 07:15:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.14( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.11( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.3( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.2( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.9( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.a( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.8( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.6( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.4( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.5( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1a( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.18( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.12( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.10( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=37/38 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.14( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.0( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 43'550 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.2( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.a( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.4( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.5( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1a( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.12( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.10( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-mon[74928]: pgmap v102: 228 pgs: 31 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 46 op/s
Dec 13 07:15:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 48 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=47 pi=[37,47)/1 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-mon[74928]: 3.b scrub starts
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-mon[74928]: 3.b scrub ok
Dec 13 07:15:49 compute-0 ceph-mon[74928]: 6.5 scrub starts
Dec 13 07:15:49 compute-0 ceph-mon[74928]: 6.5 scrub ok
Dec 13 07:15:49 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:49 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v105: 290 pgs: 93 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 07:15:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:15:50 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 13 07:15:50 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 13 07:15:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 13 07:15:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:15:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 13 07:15:50 compute-0 ceph-mon[74928]: 3.2 scrub starts
Dec 13 07:15:50 compute-0 ceph-mon[74928]: 3.2 scrub ok
Dec 13 07:15:50 compute-0 ceph-mon[74928]: 5.1b scrub starts
Dec 13 07:15:50 compute-0 ceph-mon[74928]: 5.1b scrub ok
Dec 13 07:15:50 compute-0 ceph-mon[74928]: osdmap e48: 3 total, 3 up, 3 in
Dec 13 07:15:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 07:15:50 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 13 07:15:50 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 49 pg[11.0( v 43'2 (0'0,43'2] local-lis/les=41/42 n=2 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=49 pruub=10.106846809s) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 43'1 mlcod 43'1 active pruub 105.078475952s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:50 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 49 pg[11.0( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=49 pruub=10.106846809s) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 43'1 mlcod 0'0 unknown pruub 105.078475952s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 13 07:15:51 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 13 07:15:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 13 07:15:51 compute-0 ceph-mon[74928]: pgmap v105: 290 pgs: 93 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:51 compute-0 ceph-mon[74928]: 3.0 scrub starts
Dec 13 07:15:51 compute-0 ceph-mon[74928]: 3.0 scrub ok
Dec 13 07:15:51 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 07:15:51 compute-0 ceph-mon[74928]: osdmap e49: 3 total, 3 up, 3 in
Dec 13 07:15:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 13 07:15:51 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.16( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.13( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=1 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=41/42 n=1 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.5( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.7( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.16( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.13( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.0( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 43'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=41/42 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.5( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.7( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v108: 321 pgs: 62 unknown, 259 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:52 compute-0 ceph-mon[74928]: 7.0 scrub starts
Dec 13 07:15:52 compute-0 ceph-mon[74928]: 7.0 scrub ok
Dec 13 07:15:52 compute-0 ceph-mon[74928]: osdmap e50: 3 total, 3 up, 3 in
Dec 13 07:15:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:53 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 13 07:15:53 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 13 07:15:53 compute-0 ceph-mon[74928]: pgmap v108: 321 pgs: 62 unknown, 259 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v109: 321 pgs: 31 unknown, 290 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:54 compute-0 ceph-mon[74928]: 3.4 scrub starts
Dec 13 07:15:54 compute-0 ceph-mon[74928]: 3.4 scrub ok
Dec 13 07:15:55 compute-0 sshd-session[96802]: Accepted publickey for zuul from 192.168.122.30 port 44996 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:15:55 compute-0 systemd-logind[745]: New session 33 of user zuul.
Dec 13 07:15:55 compute-0 systemd[1]: Started Session 33 of User zuul.
Dec 13 07:15:55 compute-0 sshd-session[96802]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:15:55 compute-0 ceph-mon[74928]: pgmap v109: 321 pgs: 31 unknown, 290 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:55 compute-0 python3.9[96955]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:15:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v110: 321 pgs: 321 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:15:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:15:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 07:15:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:15:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 13 07:15:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:15:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:15:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 07:15:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:15:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 13 07:15:56 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 13 07:15:56 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 13 07:15:56 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863642693s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896354675s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863616943s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863552094s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.896423340s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863612175s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.896522522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865627289s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898643494s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863466263s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896522522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863352776s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896545410s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863326073s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896537781s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864120483s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897537231s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863996506s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897552490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863911629s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897605896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862752914s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896553040s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863280296s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897605896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863158226s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897590637s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862961769s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897644043s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863029480s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897651672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862700462s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897689819s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862665176s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897674561s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862836838s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.898017883s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862802505s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898086548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862488747s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898033142s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860626221s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844452858s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718498230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854372978s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.729553223s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843180656s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718505859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872672081s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748138428s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854546547s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730064392s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841082573s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852649689s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730140686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870583534s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748161316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870308876s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748184204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840455055s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718482971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852058411s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730171204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869616508s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748191833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839732170s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718414307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851435661s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730194092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869359016s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748191833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839279175s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718368530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869021416s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748207092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839054108s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718353271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850866318s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730262756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868752480s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748222351s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850701332s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730285645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868589401s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748245239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850529671s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730285645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838470459s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718345642s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868296623s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748268127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838410378s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838177681s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718376160s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867964745s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748283386s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849902153s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730323792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867810249s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748298645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837768555s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718368530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849675179s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730346680s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868008614s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867127419s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748153687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849251747s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 active pruub 109.730361938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867587090s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836945534s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718269348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867397308s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748802185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867756844s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749267578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848821640s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730400085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867094994s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748840332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836301804s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718193054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867068291s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749168396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835968971s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718185425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866882324s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749176025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848036766s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730422974s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835701942s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718177795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866633415s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749198914s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866514206s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749183655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832565308s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.715339661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848815918s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.731674194s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866249084s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749206543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835205078s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866911888s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.750068665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847294807s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730545044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834938049s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846091270s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730407715s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830695152s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.715332031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833575249s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863280296s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748130798s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:56 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:56 compute-0 sudo[97171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-medtlbnjxnakzmzigxccobnbaxgtjjiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610156.571233-32-171845450933666/AnsiballZ_command.py'
Dec 13 07:15:56 compute-0 sudo[97171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:15:57 compute-0 python3.9[97173]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:15:57 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 13 07:15:57 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 13 07:15:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 13 07:15:57 compute-0 ceph-mon[74928]: pgmap v110: 321 pgs: 321 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:15:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:15:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 07:15:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:15:57 compute-0 ceph-mon[74928]: osdmap e51: 3 total, 3 up, 3 in
Dec 13 07:15:57 compute-0 ceph-mon[74928]: 2.1e scrub starts
Dec 13 07:15:57 compute-0 ceph-mon[74928]: 2.1e scrub ok
Dec 13 07:15:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 13 07:15:57 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:15:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 13 07:15:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v113: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 13 07:15:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 13 07:15:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 13 07:15:58 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 13 07:15:58 compute-0 ceph-mon[74928]: 5.0 scrub starts
Dec 13 07:15:58 compute-0 ceph-mon[74928]: 5.0 scrub ok
Dec 13 07:15:58 compute-0 ceph-mon[74928]: osdmap e52: 3 total, 3 up, 3 in
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:58 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:15:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 13 07:15:59 compute-0 ceph-mon[74928]: 7.7 scrub starts
Dec 13 07:15:59 compute-0 ceph-mon[74928]: pgmap v113: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:15:59 compute-0 ceph-mon[74928]: 7.7 scrub ok
Dec 13 07:15:59 compute-0 ceph-mon[74928]: osdmap e53: 3 total, 3 up, 3 in
Dec 13 07:15:59 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 13 07:15:59 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327981949s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118011475s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327603340s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117866516s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327441216s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117881775s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326743126s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117576599s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326932907s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117958069s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326637268s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117897034s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323873520s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.116912842s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324602127s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117736816s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324680328s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118041992s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324029922s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117660522s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323761940s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117958069s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323680878s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117927551s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322302818s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.116996765s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323290825s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118003845s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:15:59 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v116: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 364 B/s, 0 objects/s recovering
Dec 13 07:16:00 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 13 07:16:00 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 13 07:16:00 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 13 07:16:00 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 13 07:16:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 13 07:16:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 13 07:16:00 compute-0 ceph-mon[74928]: osdmap e54: 3 total, 3 up, 3 in
Dec 13 07:16:00 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 13 07:16:00 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320955276s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 active pruub 119.118865967s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:00 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:00 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319732666s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 active pruub 119.118080139s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:00 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=48'552 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:01 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 13 07:16:01 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 13 07:16:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 13 07:16:01 compute-0 ceph-mon[74928]: pgmap v116: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 364 B/s, 0 objects/s recovering
Dec 13 07:16:01 compute-0 ceph-mon[74928]: 6.a scrub starts
Dec 13 07:16:01 compute-0 ceph-mon[74928]: 3.18 scrub starts
Dec 13 07:16:01 compute-0 ceph-mon[74928]: 6.a scrub ok
Dec 13 07:16:01 compute-0 ceph-mon[74928]: 3.18 scrub ok
Dec 13 07:16:01 compute-0 ceph-mon[74928]: osdmap e55: 3 total, 3 up, 3 in
Dec 13 07:16:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 13 07:16:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 13 07:16:01 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=55/56 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'554 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:01 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v119: 321 pgs: 14 unknown, 2 active+remapped, 32 peering, 273 active+clean; 458 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1023 B/s wr, 26 op/s; 600 B/s, 4 objects/s recovering
Dec 13 07:16:02 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 13 07:16:02 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 13 07:16:02 compute-0 ceph-mon[74928]: 4.1c scrub starts
Dec 13 07:16:02 compute-0 ceph-mon[74928]: 4.1c scrub ok
Dec 13 07:16:02 compute-0 ceph-mon[74928]: osdmap e56: 3 total, 3 up, 3 in
Dec 13 07:16:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:03 compute-0 ceph-mon[74928]: pgmap v119: 321 pgs: 14 unknown, 2 active+remapped, 32 peering, 273 active+clean; 458 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1023 B/s wr, 26 op/s; 600 B/s, 4 objects/s recovering
Dec 13 07:16:03 compute-0 ceph-mon[74928]: 7.1c scrub starts
Dec 13 07:16:03 compute-0 ceph-mon[74928]: 7.1c scrub ok
Dec 13 07:16:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v120: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.7 KiB/s wr, 145 op/s; 832 B/s, 19 objects/s recovering
Dec 13 07:16:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 13 07:16:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 07:16:04 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 13 07:16:04 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 13 07:16:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 13 07:16:04 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 07:16:04 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 07:16:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 13 07:16:04 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 13 07:16:05 compute-0 ceph-mon[74928]: pgmap v120: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.7 KiB/s wr, 145 op/s; 832 B/s, 19 objects/s recovering
Dec 13 07:16:05 compute-0 ceph-mon[74928]: 4.1f scrub starts
Dec 13 07:16:05 compute-0 ceph-mon[74928]: 4.1f scrub ok
Dec 13 07:16:05 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 07:16:05 compute-0 ceph-mon[74928]: osdmap e57: 3 total, 3 up, 3 in
Dec 13 07:16:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v122: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 5.5 KiB/s wr, 140 op/s; 802 B/s, 18 objects/s recovering
Dec 13 07:16:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 13 07:16:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 07:16:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 13 07:16:06 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 07:16:06 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 07:16:06 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 13 07:16:06 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 13 07:16:06 compute-0 sudo[97180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:16:06 compute-0 sudo[97180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:06 compute-0 sudo[97180]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:06 compute-0 sudo[97205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:16:06 compute-0 sudo[97205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:07 compute-0 sudo[97205]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:16:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:16:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:16:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:16:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:16:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:16:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:16:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:16:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:16:07 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:16:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:16:07 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:16:07 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 13 07:16:07 compute-0 sudo[97259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:16:07 compute-0 sudo[97259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:07 compute-0 sudo[97259]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:07 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 13 07:16:07 compute-0 sudo[97284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:16:07 compute-0 sudo[97284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:07 compute-0 ceph-mon[74928]: pgmap v122: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 5.5 KiB/s wr, 140 op/s; 802 B/s, 18 objects/s recovering
Dec 13 07:16:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 07:16:07 compute-0 ceph-mon[74928]: osdmap e58: 3 total, 3 up, 3 in
Dec 13 07:16:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:16:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:16:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:16:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:16:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:16:07 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:16:07 compute-0 podman[97319]: 2025-12-13 07:16:07.673424865 +0000 UTC m=+0.026791015 container create 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 07:16:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:07 compute-0 systemd[1]: Started libpod-conmon-82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf.scope.
Dec 13 07:16:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:16:07 compute-0 podman[97319]: 2025-12-13 07:16:07.736997894 +0000 UTC m=+0.090364043 container init 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:16:07 compute-0 podman[97319]: 2025-12-13 07:16:07.742792655 +0000 UTC m=+0.096158805 container start 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 07:16:07 compute-0 podman[97319]: 2025-12-13 07:16:07.745947824 +0000 UTC m=+0.099313994 container attach 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:16:07 compute-0 adoring_leavitt[97332]: 167 167
Dec 13 07:16:07 compute-0 systemd[1]: libpod-82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf.scope: Deactivated successfully.
Dec 13 07:16:07 compute-0 conmon[97332]: conmon 82ea14481616421d8071 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf.scope/container/memory.events
Dec 13 07:16:07 compute-0 podman[97319]: 2025-12-13 07:16:07.747842543 +0000 UTC m=+0.101208693 container died 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4ed489840e4930b4483950a8efbdec4fb2cf7c847380cae6b1f6b508f69a1e6-merged.mount: Deactivated successfully.
Dec 13 07:16:07 compute-0 podman[97319]: 2025-12-13 07:16:07.662713444 +0000 UTC m=+0.016079615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:16:07 compute-0 podman[97319]: 2025-12-13 07:16:07.769295809 +0000 UTC m=+0.122661960 container remove 82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:16:07 compute-0 systemd[1]: libpod-conmon-82ea14481616421d807138becbfb647ea372aea4638e40c0b8852bb2bb8a79bf.scope: Deactivated successfully.
Dec 13 07:16:07 compute-0 podman[97353]: 2025-12-13 07:16:07.880013639 +0000 UTC m=+0.028154688 container create 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:16:07 compute-0 systemd[1]: Started libpod-conmon-08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04.scope.
Dec 13 07:16:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:07 compute-0 podman[97353]: 2025-12-13 07:16:07.936944829 +0000 UTC m=+0.085085888 container init 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 07:16:07 compute-0 podman[97353]: 2025-12-13 07:16:07.942492709 +0000 UTC m=+0.090633758 container start 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 07:16:07 compute-0 podman[97353]: 2025-12-13 07:16:07.943623257 +0000 UTC m=+0.091764305 container attach 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:16:07 compute-0 podman[97353]: 2025-12-13 07:16:07.867958984 +0000 UTC m=+0.016100053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:16:08 compute-0 sudo[97171]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v124: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.3 KiB/s wr, 109 op/s; 572 B/s, 13 objects/s recovering
Dec 13 07:16:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 13 07:16:08 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 07:16:08 compute-0 friendly_lovelace[97366]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:16:08 compute-0 friendly_lovelace[97366]: --> All data devices are unavailable
Dec 13 07:16:08 compute-0 systemd[1]: libpod-08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04.scope: Deactivated successfully.
Dec 13 07:16:08 compute-0 podman[97353]: 2025-12-13 07:16:08.313035031 +0000 UTC m=+0.461176081 container died 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d4864f05d50444d8163148a303843deffddf72be31335b50198f3df52dc9b1b-merged.mount: Deactivated successfully.
Dec 13 07:16:08 compute-0 podman[97353]: 2025-12-13 07:16:08.33514591 +0000 UTC m=+0.483286959 container remove 08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_lovelace, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 07:16:08 compute-0 systemd[1]: libpod-conmon-08e9aef1c643ede25a77b736b7cdd5d57dd1d9c6fedb024dbca4445cb2e0da04.scope: Deactivated successfully.
Dec 13 07:16:08 compute-0 sudo[97284]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:08 compute-0 sudo[97419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:16:08 compute-0 sudo[97419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:08 compute-0 sudo[97419]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:08 compute-0 sshd-session[96805]: Connection closed by 192.168.122.30 port 44996
Dec 13 07:16:08 compute-0 sshd-session[96802]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:16:08 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Dec 13 07:16:08 compute-0 systemd[1]: session-33.scope: Consumed 1.380s CPU time.
Dec 13 07:16:08 compute-0 systemd-logind[745]: Session 33 logged out. Waiting for processes to exit.
Dec 13 07:16:08 compute-0 systemd-logind[745]: Removed session 33.
Dec 13 07:16:08 compute-0 sudo[97444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:16:08 compute-0 sudo[97444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 13 07:16:08 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 07:16:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 13 07:16:08 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 13 07:16:08 compute-0 ceph-mon[74928]: 4.3 scrub starts
Dec 13 07:16:08 compute-0 ceph-mon[74928]: 4.3 scrub ok
Dec 13 07:16:08 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 07:16:08 compute-0 podman[97480]: 2025-12-13 07:16:08.675813424 +0000 UTC m=+0.025264566 container create 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:16:08 compute-0 systemd[1]: Started libpod-conmon-66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c.scope.
Dec 13 07:16:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:16:08 compute-0 podman[97480]: 2025-12-13 07:16:08.725914943 +0000 UTC m=+0.075366096 container init 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:16:08 compute-0 podman[97480]: 2025-12-13 07:16:08.730249981 +0000 UTC m=+0.079701114 container start 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:16:08 compute-0 podman[97480]: 2025-12-13 07:16:08.731628574 +0000 UTC m=+0.081079705 container attach 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:16:08 compute-0 modest_lewin[97493]: 167 167
Dec 13 07:16:08 compute-0 systemd[1]: libpod-66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c.scope: Deactivated successfully.
Dec 13 07:16:08 compute-0 conmon[97493]: conmon 66fdaa6548e1fe584fd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c.scope/container/memory.events
Dec 13 07:16:08 compute-0 podman[97480]: 2025-12-13 07:16:08.733744667 +0000 UTC m=+0.083195819 container died 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-454d080702d7dd9f66664ae4b36265d3a4e8624610064527c805c16d1d1e4939-merged.mount: Deactivated successfully.
Dec 13 07:16:08 compute-0 podman[97480]: 2025-12-13 07:16:08.751877556 +0000 UTC m=+0.101328688 container remove 66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:16:08 compute-0 podman[97480]: 2025-12-13 07:16:08.665763223 +0000 UTC m=+0.015214375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:16:08 compute-0 systemd[1]: libpod-conmon-66fdaa6548e1fe584fd276461a08a983c36d77d0a76c135c4d0d9a562120012c.scope: Deactivated successfully.
Dec 13 07:16:08 compute-0 podman[97515]: 2025-12-13 07:16:08.868168342 +0000 UTC m=+0.028788025 container create 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:16:08 compute-0 systemd[1]: Started libpod-conmon-773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6.scope.
Dec 13 07:16:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:08 compute-0 podman[97515]: 2025-12-13 07:16:08.923922958 +0000 UTC m=+0.084542661 container init 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:16:08 compute-0 podman[97515]: 2025-12-13 07:16:08.929302383 +0000 UTC m=+0.089922066 container start 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:16:08 compute-0 podman[97515]: 2025-12-13 07:16:08.930394287 +0000 UTC m=+0.091013970 container attach 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:16:08 compute-0 podman[97515]: 2025-12-13 07:16:08.855566602 +0000 UTC m=+0.016186305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:16:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:16:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:16:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:16:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:16:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:16:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]: {
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:     "0": [
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:         {
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "devices": [
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "/dev/loop3"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             ],
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_name": "ceph_lv0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_size": "21470642176",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "name": "ceph_lv0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "tags": {
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cluster_name": "ceph",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.crush_device_class": "",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.encrypted": "0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.objectstore": "bluestore",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osd_id": "0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.type": "block",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.vdo": "0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.with_tpm": "0"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             },
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "type": "block",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "vg_name": "ceph_vg0"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:         }
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:     ],
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:     "1": [
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:         {
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "devices": [
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "/dev/loop4"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             ],
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_name": "ceph_lv1",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_size": "21470642176",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "name": "ceph_lv1",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "tags": {
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cluster_name": "ceph",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.crush_device_class": "",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.encrypted": "0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.objectstore": "bluestore",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osd_id": "1",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.type": "block",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.vdo": "0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.with_tpm": "0"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             },
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "type": "block",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "vg_name": "ceph_vg1"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:         }
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:     ],
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:     "2": [
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:         {
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "devices": [
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "/dev/loop5"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             ],
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_name": "ceph_lv2",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_size": "21470642176",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "name": "ceph_lv2",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "tags": {
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.cluster_name": "ceph",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.crush_device_class": "",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.encrypted": "0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.objectstore": "bluestore",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osd_id": "2",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.type": "block",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.vdo": "0",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:                 "ceph.with_tpm": "0"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             },
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "type": "block",
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:             "vg_name": "ceph_vg2"
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:         }
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]:     ]
Dec 13 07:16:09 compute-0 inspiring_herschel[97528]: }
Dec 13 07:16:09 compute-0 systemd[1]: libpod-773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6.scope: Deactivated successfully.
Dec 13 07:16:09 compute-0 podman[97537]: 2025-12-13 07:16:09.208375514 +0000 UTC m=+0.018011083 container died 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 07:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a17fc286d2740a991f563d140f2a066445d8a1deecdae4887b0f5740db3cfb7-merged.mount: Deactivated successfully.
Dec 13 07:16:09 compute-0 podman[97537]: 2025-12-13 07:16:09.22901131 +0000 UTC m=+0.038646879 container remove 773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:16:09 compute-0 systemd[1]: libpod-conmon-773b59918a2ef59f9d993e0cc6ea4a7c164c262b81b25115a45e793f4a9b67c6.scope: Deactivated successfully.
Dec 13 07:16:09 compute-0 sudo[97444]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:09 compute-0 sudo[97549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:16:09 compute-0 sudo[97549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:09 compute-0 sudo[97549]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:09 compute-0 sudo[97574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:16:09 compute-0 sudo[97574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:09 compute-0 ceph-mon[74928]: pgmap v124: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.3 KiB/s wr, 109 op/s; 572 B/s, 13 objects/s recovering
Dec 13 07:16:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 07:16:09 compute-0 ceph-mon[74928]: osdmap e59: 3 total, 3 up, 3 in
Dec 13 07:16:09 compute-0 podman[97610]: 2025-12-13 07:16:09.580054105 +0000 UTC m=+0.027718152 container create 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:16:09 compute-0 systemd[1]: Started libpod-conmon-443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366.scope.
Dec 13 07:16:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:16:09 compute-0 podman[97610]: 2025-12-13 07:16:09.63955658 +0000 UTC m=+0.087220626 container init 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:16:09 compute-0 podman[97610]: 2025-12-13 07:16:09.644405721 +0000 UTC m=+0.092069757 container start 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:16:09 compute-0 podman[97610]: 2025-12-13 07:16:09.645662865 +0000 UTC m=+0.093326921 container attach 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:16:09 compute-0 ecstatic_jennings[97624]: 167 167
Dec 13 07:16:09 compute-0 systemd[1]: libpod-443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366.scope: Deactivated successfully.
Dec 13 07:16:09 compute-0 podman[97610]: 2025-12-13 07:16:09.647848398 +0000 UTC m=+0.095512445 container died 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 13 07:16:09 compute-0 podman[97610]: 2025-12-13 07:16:09.569240854 +0000 UTC m=+0.016904910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:16:09 compute-0 podman[97610]: 2025-12-13 07:16:09.668070319 +0000 UTC m=+0.115734365 container remove 443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jennings, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d8afbe030691f5c12daadafa01d59e7fccd21b7c1fe5b9dee7bb795ddf9f55d-merged.mount: Deactivated successfully.
Dec 13 07:16:09 compute-0 systemd[1]: libpod-conmon-443c021b8be5369bd1c86f219ca6880cd08780cbde94a6e3e80db68737f3a366.scope: Deactivated successfully.
Dec 13 07:16:09 compute-0 podman[97645]: 2025-12-13 07:16:09.777812061 +0000 UTC m=+0.026874582 container create 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:16:09 compute-0 systemd[1]: Started libpod-conmon-548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031.scope.
Dec 13 07:16:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:16:09 compute-0 podman[97645]: 2025-12-13 07:16:09.836369065 +0000 UTC m=+0.085431585 container init 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:16:09 compute-0 podman[97645]: 2025-12-13 07:16:09.841546551 +0000 UTC m=+0.090609071 container start 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:16:09 compute-0 podman[97645]: 2025-12-13 07:16:09.842748543 +0000 UTC m=+0.091811063 container attach 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:16:09 compute-0 podman[97645]: 2025-12-13 07:16:09.767352583 +0000 UTC m=+0.016415123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:16:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v126: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 13 07:16:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 07:16:10 compute-0 lvm[97736]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:16:10 compute-0 lvm[97737]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:16:10 compute-0 lvm[97737]: VG ceph_vg1 finished
Dec 13 07:16:10 compute-0 lvm[97736]: VG ceph_vg0 finished
Dec 13 07:16:10 compute-0 lvm[97740]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:16:10 compute-0 lvm[97740]: VG ceph_vg2 finished
Dec 13 07:16:10 compute-0 awesome_pike[97659]: {}
Dec 13 07:16:10 compute-0 lvm[97743]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:16:10 compute-0 lvm[97743]: VG ceph_vg0 finished
Dec 13 07:16:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 13 07:16:10 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 07:16:10 compute-0 podman[97645]: 2025-12-13 07:16:10.483277621 +0000 UTC m=+0.732340141 container died 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:16:10 compute-0 systemd[1]: libpod-548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031.scope: Deactivated successfully.
Dec 13 07:16:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 07:16:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 13 07:16:10 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 13 07:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c2b4e1e85047b8940082097d278825b85c393676363fdf968634a460b2e8dae-merged.mount: Deactivated successfully.
Dec 13 07:16:10 compute-0 podman[97645]: 2025-12-13 07:16:10.50783982 +0000 UTC m=+0.756902339 container remove 548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:16:10 compute-0 systemd[1]: libpod-conmon-548f6ff34e815117b826f7874b1ec17518d16914a4e670c7cc1c93f718e6d031.scope: Deactivated successfully.
Dec 13 07:16:10 compute-0 sudo[97574]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:16:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:16:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:16:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:16:10 compute-0 sudo[97753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:16:10 compute-0 sudo[97753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:16:10 compute-0 sudo[97753]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:11 compute-0 ceph-mon[74928]: pgmap v126: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 07:16:11 compute-0 ceph-mon[74928]: osdmap e60: 3 total, 3 up, 3 in
Dec 13 07:16:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:16:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:16:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v128: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 13 07:16:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 07:16:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 13 07:16:12 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 07:16:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 07:16:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 13 07:16:12 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887648582s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730354309s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887639046s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730476379s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887332916s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730545044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887020111s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730529785s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 13 07:16:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 13 07:16:12 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:12 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:13 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 13 07:16:13 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 13 07:16:13 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 13 07:16:13 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 13 07:16:13 compute-0 ceph-mon[74928]: pgmap v128: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 07:16:13 compute-0 ceph-mon[74928]: osdmap e61: 3 total, 3 up, 3 in
Dec 13 07:16:13 compute-0 ceph-mon[74928]: osdmap e62: 3 total, 3 up, 3 in
Dec 13 07:16:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 13 07:16:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 13 07:16:13 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 13 07:16:13 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:13 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:13 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:13 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v132: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 13 07:16:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 07:16:14 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 13 07:16:14 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 13 07:16:14 compute-0 ceph-mon[74928]: 7.d scrub starts
Dec 13 07:16:14 compute-0 ceph-mon[74928]: 7.d scrub ok
Dec 13 07:16:14 compute-0 ceph-mon[74928]: 3.16 scrub starts
Dec 13 07:16:14 compute-0 ceph-mon[74928]: 3.16 scrub ok
Dec 13 07:16:14 compute-0 ceph-mon[74928]: osdmap e63: 3 total, 3 up, 3 in
Dec 13 07:16:14 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 07:16:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 13 07:16:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 07:16:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 13 07:16:14 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 13 07:16:14 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997287750s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.053726196s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997888565s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.054489136s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:14 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997774124s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.054504395s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:14 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998094559s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.055038452s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743862152s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949691772s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743840218s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949691772s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743666649s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949722290s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743650436s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949722290s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743508339s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949981689s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743436813s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949981689s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744418144s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 active pruub 132.951171875s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744332314s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY pruub 132.951171875s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:14 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:14 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:15 compute-0 ceph-mon[74928]: pgmap v132: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:15 compute-0 ceph-mon[74928]: 3.1c scrub starts
Dec 13 07:16:15 compute-0 ceph-mon[74928]: 3.1c scrub ok
Dec 13 07:16:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 07:16:15 compute-0 ceph-mon[74928]: osdmap e64: 3 total, 3 up, 3 in
Dec 13 07:16:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 13 07:16:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 13 07:16:15 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:15 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:15 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:15 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:15 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:15 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:15 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:15 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:15 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v135: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 13 07:16:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 07:16:16 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 13 07:16:16 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 13 07:16:16 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 13 07:16:16 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 13 07:16:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 13 07:16:16 compute-0 ceph-mon[74928]: osdmap e65: 3 total, 3 up, 3 in
Dec 13 07:16:16 compute-0 ceph-mon[74928]: pgmap v135: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 07:16:16 compute-0 ceph-mon[74928]: 7.19 scrub starts
Dec 13 07:16:16 compute-0 ceph-mon[74928]: 7.19 scrub ok
Dec 13 07:16:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 07:16:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 13 07:16:16 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 13 07:16:16 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:16 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:16 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:16 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:17 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 13 07:16:17 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 13 07:16:17 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 13 07:16:17 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 13 07:16:17 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774515152s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 active pruub 133.730743408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:17 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774012566s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 active pruub 133.730667114s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 13 07:16:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 13 07:16:17 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:17 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:17 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 13 07:16:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012885094s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214294434s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012840271s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214294434s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011977196s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214172363s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011938095s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214172363s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011684418s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214202881s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011443138s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214202881s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.010023117s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 active pruub 140.213317871s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.009953499s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY pruub 140.213317871s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:17 compute-0 ceph-mon[74928]: 5.1e scrub starts
Dec 13 07:16:17 compute-0 ceph-mon[74928]: 5.1e scrub ok
Dec 13 07:16:17 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 07:16:17 compute-0 ceph-mon[74928]: osdmap e66: 3 total, 3 up, 3 in
Dec 13 07:16:17 compute-0 ceph-mon[74928]: 6.1f scrub starts
Dec 13 07:16:17 compute-0 ceph-mon[74928]: 6.1f scrub ok
Dec 13 07:16:17 compute-0 ceph-mon[74928]: 2.f scrub starts
Dec 13 07:16:17 compute-0 ceph-mon[74928]: osdmap e67: 3 total, 3 up, 3 in
Dec 13 07:16:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v138: 321 pgs: 4 active+remapped, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 290 B/s, 5 objects/s recovering
Dec 13 07:16:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 13 07:16:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 07:16:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 13 07:16:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 07:16:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 13 07:16:18 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 13 07:16:18 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:18 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:18 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:18 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:18 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:18 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:18 compute-0 ceph-mon[74928]: 2.f scrub ok
Dec 13 07:16:18 compute-0 ceph-mon[74928]: pgmap v138: 321 pgs: 4 active+remapped, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 290 B/s, 5 objects/s recovering
Dec 13 07:16:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 07:16:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 07:16:18 compute-0 ceph-mon[74928]: osdmap e68: 3 total, 3 up, 3 in
Dec 13 07:16:19 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 13 07:16:19 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 13 07:16:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 13 07:16:19 compute-0 ceph-mon[74928]: 5.1d scrub starts
Dec 13 07:16:19 compute-0 ceph-mon[74928]: 5.1d scrub ok
Dec 13 07:16:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 13 07:16:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 13 07:16:19 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981441498s) [2] async=[2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 active pruub 139.056365967s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:19 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:19 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981289864s) [2] async=[2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 active pruub 139.056335449s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:19 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:19 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:19 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:19 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:19 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v141: 321 pgs: 4 peering, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 498 B/s, 11 objects/s recovering
Dec 13 07:16:20 compute-0 sshd-session[97778]: Accepted publickey for zuul from 192.168.122.30 port 53536 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:16:20 compute-0 systemd-logind[745]: New session 34 of user zuul.
Dec 13 07:16:20 compute-0 systemd[1]: Started Session 34 of User zuul.
Dec 13 07:16:20 compute-0 sshd-session[97778]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:16:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 13 07:16:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 13 07:16:20 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 13 07:16:20 compute-0 ceph-mon[74928]: osdmap e69: 3 total, 3 up, 3 in
Dec 13 07:16:20 compute-0 ceph-mon[74928]: pgmap v141: 321 pgs: 4 peering, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 498 B/s, 11 objects/s recovering
Dec 13 07:16:20 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:20 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:21 compute-0 python3.9[97931]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:16:21 compute-0 ceph-mon[74928]: osdmap e70: 3 total, 3 up, 3 in
Dec 13 07:16:22 compute-0 sudo[98147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxemtzuyfsxouvtnbuggjrqjvvzgnrie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610181.8846822-32-77973216619665/AnsiballZ_command.py'
Dec 13 07:16:22 compute-0 sudo[98147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:16:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v143: 321 pgs: 4 peering, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 183 B/s, 5 objects/s recovering
Dec 13 07:16:22 compute-0 python3.9[98149]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:16:22 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 13 07:16:22 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 13 07:16:22 compute-0 sudo[98147]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:22 compute-0 ceph-mon[74928]: pgmap v143: 321 pgs: 4 peering, 317 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 183 B/s, 5 objects/s recovering
Dec 13 07:16:22 compute-0 ceph-mon[74928]: 2.16 scrub starts
Dec 13 07:16:22 compute-0 ceph-mon[74928]: 2.16 scrub ok
Dec 13 07:16:22 compute-0 sshd-session[97781]: Connection closed by 192.168.122.30 port 53536
Dec 13 07:16:22 compute-0 sshd-session[97778]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:16:22 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 13 07:16:22 compute-0 systemd[1]: session-34.scope: Consumed 1.377s CPU time.
Dec 13 07:16:22 compute-0 systemd-logind[745]: Session 34 logged out. Waiting for processes to exit.
Dec 13 07:16:22 compute-0 systemd-logind[745]: Removed session 34.
Dec 13 07:16:23 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 13 07:16:23 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 13 07:16:23 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 13 07:16:23 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 13 07:16:23 compute-0 ceph-mon[74928]: 5.15 scrub starts
Dec 13 07:16:23 compute-0 ceph-mon[74928]: 5.15 scrub ok
Dec 13 07:16:23 compute-0 ceph-mon[74928]: 4.11 scrub starts
Dec 13 07:16:23 compute-0 ceph-mon[74928]: 4.11 scrub ok
Dec 13 07:16:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 226 B/s, 6 objects/s recovering
Dec 13 07:16:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 13 07:16:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 07:16:24 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 13 07:16:24 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 13 07:16:24 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 13 07:16:24 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 13 07:16:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 13 07:16:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 07:16:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 13 07:16:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 13 07:16:24 compute-0 ceph-mon[74928]: pgmap v144: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 226 B/s, 6 objects/s recovering
Dec 13 07:16:24 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 07:16:24 compute-0 ceph-mon[74928]: 2.13 scrub starts
Dec 13 07:16:24 compute-0 ceph-mon[74928]: 2.13 scrub ok
Dec 13 07:16:24 compute-0 ceph-mon[74928]: 6.11 scrub starts
Dec 13 07:16:24 compute-0 ceph-mon[74928]: 6.11 scrub ok
Dec 13 07:16:25 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 13 07:16:25 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 13 07:16:25 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 13 07:16:25 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 13 07:16:25 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 07:16:25 compute-0 ceph-mon[74928]: osdmap e71: 3 total, 3 up, 3 in
Dec 13 07:16:25 compute-0 ceph-mon[74928]: 2.19 scrub starts
Dec 13 07:16:25 compute-0 ceph-mon[74928]: 2.19 scrub ok
Dec 13 07:16:25 compute-0 ceph-mon[74928]: 4.d scrub starts
Dec 13 07:16:25 compute-0 ceph-mon[74928]: 4.d scrub ok
Dec 13 07:16:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v146: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 2 objects/s recovering
Dec 13 07:16:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 13 07:16:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 07:16:26 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 13 07:16:26 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 13 07:16:26 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 13 07:16:26 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 13 07:16:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 13 07:16:26 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 07:16:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 13 07:16:26 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 13 07:16:26 compute-0 ceph-mon[74928]: pgmap v146: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 2 objects/s recovering
Dec 13 07:16:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 07:16:26 compute-0 ceph-mon[74928]: 4.13 scrub starts
Dec 13 07:16:26 compute-0 ceph-mon[74928]: 4.f scrub starts
Dec 13 07:16:26 compute-0 ceph-mon[74928]: 4.13 scrub ok
Dec 13 07:16:26 compute-0 ceph-mon[74928]: 4.f scrub ok
Dec 13 07:16:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:27 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 07:16:27 compute-0 ceph-mon[74928]: osdmap e72: 3 total, 3 up, 3 in
Dec 13 07:16:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 70 B/s, 1 objects/s recovering
Dec 13 07:16:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 13 07:16:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 07:16:28 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 13 07:16:28 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 13 07:16:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 13 07:16:28 compute-0 ceph-mon[74928]: pgmap v148: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 70 B/s, 1 objects/s recovering
Dec 13 07:16:28 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 07:16:28 compute-0 ceph-mon[74928]: 7.11 scrub starts
Dec 13 07:16:28 compute-0 ceph-mon[74928]: 7.11 scrub ok
Dec 13 07:16:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 07:16:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 13 07:16:28 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 13 07:16:28 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499458313s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 active pruub 141.730636597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:28 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:28 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499320030s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 active pruub 141.730758667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:28 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:28 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:28 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 13 07:16:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 07:16:29 compute-0 ceph-mon[74928]: osdmap e73: 3 total, 3 up, 3 in
Dec 13 07:16:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 13 07:16:29 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 13 07:16:29 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:29 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:29 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:29 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 13 07:16:30 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 07:16:30 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 13 07:16:30 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 13 07:16:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 13 07:16:30 compute-0 ceph-mon[74928]: osdmap e74: 3 total, 3 up, 3 in
Dec 13 07:16:30 compute-0 ceph-mon[74928]: pgmap v151: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:30 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 07:16:30 compute-0 ceph-mon[74928]: 6.13 scrub starts
Dec 13 07:16:30 compute-0 ceph-mon[74928]: 6.13 scrub ok
Dec 13 07:16:30 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 07:16:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 13 07:16:30 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 13 07:16:30 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:30 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:31 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 13 07:16:31 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 13 07:16:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 13 07:16:31 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 07:16:31 compute-0 ceph-mon[74928]: osdmap e75: 3 total, 3 up, 3 in
Dec 13 07:16:31 compute-0 ceph-mon[74928]: 6.d scrub starts
Dec 13 07:16:31 compute-0 ceph-mon[74928]: 6.d scrub ok
Dec 13 07:16:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 13 07:16:31 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 13 07:16:31 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031506538s) [2] async=[2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 active pruub 151.159759521s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:31 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:31 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031195641s) [2] async=[2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 active pruub 151.159759521s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:31 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:31 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:31 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:31 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:31 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v154: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:32 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 13 07:16:32 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 13 07:16:32 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 13 07:16:32 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 13 07:16:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 13 07:16:32 compute-0 ceph-mon[74928]: osdmap e76: 3 total, 3 up, 3 in
Dec 13 07:16:32 compute-0 ceph-mon[74928]: pgmap v154: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:32 compute-0 ceph-mon[74928]: 5.14 scrub starts
Dec 13 07:16:32 compute-0 ceph-mon[74928]: 5.14 scrub ok
Dec 13 07:16:32 compute-0 ceph-mon[74928]: 6.2 scrub starts
Dec 13 07:16:32 compute-0 ceph-mon[74928]: 6.2 scrub ok
Dec 13 07:16:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 13 07:16:32 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 13 07:16:32 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:32 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:33 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 13 07:16:33 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 13 07:16:33 compute-0 ceph-mon[74928]: osdmap e77: 3 total, 3 up, 3 in
Dec 13 07:16:33 compute-0 ceph-mon[74928]: 4.2 scrub starts
Dec 13 07:16:33 compute-0 ceph-mon[74928]: 4.2 scrub ok
Dec 13 07:16:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 3 objects/s recovering
Dec 13 07:16:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 13 07:16:34 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 07:16:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 13 07:16:34 compute-0 ceph-mon[74928]: pgmap v156: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 3 objects/s recovering
Dec 13 07:16:34 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 07:16:34 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 07:16:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 13 07:16:34 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 13 07:16:35 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 13 07:16:35 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 13 07:16:35 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 07:16:35 compute-0 ceph-mon[74928]: osdmap e78: 3 total, 3 up, 3 in
Dec 13 07:16:35 compute-0 ceph-mon[74928]: 2.11 scrub starts
Dec 13 07:16:35 compute-0 ceph-mon[74928]: 2.11 scrub ok
Dec 13 07:16:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 2 objects/s recovering
Dec 13 07:16:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 13 07:16:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 07:16:36 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 13 07:16:36 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 13 07:16:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 13 07:16:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 07:16:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 13 07:16:36 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 13 07:16:36 compute-0 ceph-mon[74928]: pgmap v158: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 2 objects/s recovering
Dec 13 07:16:36 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 07:16:36 compute-0 ceph-mon[74928]: 4.4 scrub starts
Dec 13 07:16:36 compute-0 ceph-mon[74928]: 4.4 scrub ok
Dec 13 07:16:37 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 13 07:16:37 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 13 07:16:37 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 13 07:16:37 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 13 07:16:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:37 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 07:16:37 compute-0 ceph-mon[74928]: osdmap e79: 3 total, 3 up, 3 in
Dec 13 07:16:37 compute-0 ceph-mon[74928]: 3.12 scrub starts
Dec 13 07:16:37 compute-0 ceph-mon[74928]: 3.12 scrub ok
Dec 13 07:16:37 compute-0 ceph-mon[74928]: 6.15 scrub starts
Dec 13 07:16:37 compute-0 ceph-mon[74928]: 6.15 scrub ok
Dec 13 07:16:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:16:38
Dec 13 07:16:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:16:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:16:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log']
Dec 13 07:16:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:16:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 13 07:16:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 13 07:16:38 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 13 07:16:38 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 13 07:16:38 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 13 07:16:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 13 07:16:38 compute-0 ceph-mon[74928]: pgmap v160: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 13 07:16:38 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 13 07:16:38 compute-0 ceph-mon[74928]: 3.15 scrub starts
Dec 13 07:16:38 compute-0 ceph-mon[74928]: 3.15 scrub ok
Dec 13 07:16:38 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 13 07:16:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 13 07:16:38 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:16:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:16:39 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 13 07:16:39 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 13 07:16:39 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 13 07:16:39 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 13 07:16:39 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 13 07:16:39 compute-0 ceph-mon[74928]: osdmap e80: 3 total, 3 up, 3 in
Dec 13 07:16:39 compute-0 ceph-mon[74928]: 6.6 scrub starts
Dec 13 07:16:39 compute-0 ceph-mon[74928]: 6.6 scrub ok
Dec 13 07:16:39 compute-0 ceph-mon[74928]: 7.15 scrub starts
Dec 13 07:16:39 compute-0 ceph-mon[74928]: 7.15 scrub ok
Dec 13 07:16:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 13 07:16:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 13 07:16:40 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 13 07:16:40 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 13 07:16:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 13 07:16:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 13 07:16:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 13 07:16:40 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 13 07:16:40 compute-0 ceph-mon[74928]: pgmap v162: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 13 07:16:40 compute-0 ceph-mon[74928]: 6.4 scrub starts
Dec 13 07:16:40 compute-0 ceph-mon[74928]: 6.4 scrub ok
Dec 13 07:16:41 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 13 07:16:41 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 13 07:16:41 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 13 07:16:41 compute-0 ceph-mon[74928]: osdmap e81: 3 total, 3 up, 3 in
Dec 13 07:16:41 compute-0 ceph-mon[74928]: 3.17 scrub starts
Dec 13 07:16:41 compute-0 ceph-mon[74928]: 3.17 scrub ok
Dec 13 07:16:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 13 07:16:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 13 07:16:42 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 13 07:16:42 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 13 07:16:42 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 13 07:16:42 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 13 07:16:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 13 07:16:42 compute-0 ceph-mon[74928]: pgmap v164: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:42 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 13 07:16:42 compute-0 ceph-mon[74928]: 6.1 scrub starts
Dec 13 07:16:42 compute-0 ceph-mon[74928]: 6.1 scrub ok
Dec 13 07:16:42 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 13 07:16:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 13 07:16:42 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 13 07:16:43 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 13 07:16:43 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 13 07:16:43 compute-0 ceph-mon[74928]: 6.14 scrub starts
Dec 13 07:16:43 compute-0 ceph-mon[74928]: 6.14 scrub ok
Dec 13 07:16:43 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 13 07:16:43 compute-0 ceph-mon[74928]: osdmap e82: 3 total, 3 up, 3 in
Dec 13 07:16:43 compute-0 ceph-mon[74928]: 7.a scrub starts
Dec 13 07:16:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v166: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 13 07:16:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 13 07:16:44 compute-0 sshd-session[98180]: Accepted publickey for zuul from 192.168.122.30 port 59822 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:16:44 compute-0 systemd-logind[745]: New session 35 of user zuul.
Dec 13 07:16:44 compute-0 systemd[1]: Started Session 35 of User zuul.
Dec 13 07:16:44 compute-0 sshd-session[98180]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:16:44 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 13 07:16:44 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 13 07:16:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 13 07:16:44 compute-0 ceph-mon[74928]: 7.a scrub ok
Dec 13 07:16:44 compute-0 ceph-mon[74928]: pgmap v166: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:44 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 13 07:16:44 compute-0 ceph-mon[74928]: 4.7 scrub starts
Dec 13 07:16:44 compute-0 ceph-mon[74928]: 4.7 scrub ok
Dec 13 07:16:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 13 07:16:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 13 07:16:44 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 13 07:16:44 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.542116165s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 active pruub 163.950912476s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:44 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.541775703s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY pruub 163.950912476s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:44 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:45 compute-0 python3.9[98333]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:16:45 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 13 07:16:45 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 13 07:16:45 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 13 07:16:45 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 13 07:16:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 13 07:16:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 13 07:16:45 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 13 07:16:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:45 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 13 07:16:45 compute-0 ceph-mon[74928]: osdmap e83: 3 total, 3 up, 3 in
Dec 13 07:16:45 compute-0 ceph-mon[74928]: 4.5 scrub starts
Dec 13 07:16:45 compute-0 ceph-mon[74928]: 4.5 scrub ok
Dec 13 07:16:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:45 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 13 07:16:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 13 07:16:46 compute-0 sudo[98549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvzjpjaxiugmwpsicdtjjbsaugivsyfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610205.9286716-32-137715374452989/AnsiballZ_command.py'
Dec 13 07:16:46 compute-0 sudo[98549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:16:46 compute-0 python3.9[98551]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:16:46 compute-0 sudo[98549]: pam_unix(sudo:session): session closed for user root
Dec 13 07:16:46 compute-0 sshd-session[98183]: Connection closed by 192.168.122.30 port 59822
Dec 13 07:16:46 compute-0 sshd-session[98180]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:16:46 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 13 07:16:46 compute-0 systemd[1]: session-35.scope: Consumed 1.381s CPU time.
Dec 13 07:16:46 compute-0 systemd-logind[745]: Session 35 logged out. Waiting for processes to exit.
Dec 13 07:16:46 compute-0 systemd-logind[745]: Removed session 35.
Dec 13 07:16:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 13 07:16:46 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 13 07:16:46 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 13 07:16:46 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 13 07:16:46 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:46 compute-0 ceph-mon[74928]: 3.e scrub starts
Dec 13 07:16:46 compute-0 ceph-mon[74928]: 3.e scrub ok
Dec 13 07:16:46 compute-0 ceph-mon[74928]: osdmap e84: 3 total, 3 up, 3 in
Dec 13 07:16:46 compute-0 ceph-mon[74928]: pgmap v169: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:46 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 13 07:16:46 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 13 07:16:46 compute-0 ceph-mon[74928]: osdmap e85: 3 total, 3 up, 3 in
Dec 13 07:16:47 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 13 07:16:47 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 13 07:16:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 13 07:16:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 13 07:16:47 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:47 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:47 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 13 07:16:47 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131115913s) [2] async=[2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 active pruub 170.337738037s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:47 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131011963s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY pruub 170.337738037s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:47 compute-0 ceph-mon[74928]: 7.13 scrub starts
Dec 13 07:16:47 compute-0 ceph-mon[74928]: 7.13 scrub ok
Dec 13 07:16:47 compute-0 ceph-mon[74928]: osdmap e86: 3 total, 3 up, 3 in
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 1 active+remapped, 320 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Dec 13 07:16:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 13 07:16:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 13 07:16:48 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 13 07:16:48 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.541771007094942e-07 of space, bias 4.0, pg target 0.0009050125208513931 quantized to 16 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:16:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:16:48 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 13 07:16:48 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 13 07:16:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 13 07:16:48 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 13 07:16:48 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 13 07:16:48 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.744177818s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 active pruub 171.950607300s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:48 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.743911743s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY pruub 171.950607300s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:48 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 13 07:16:48 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:48 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:48 compute-0 ceph-mon[74928]: pgmap v172: 321 pgs: 1 active+remapped, 320 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Dec 13 07:16:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 13 07:16:48 compute-0 ceph-mon[74928]: 2.8 scrub starts
Dec 13 07:16:48 compute-0 ceph-mon[74928]: 2.8 scrub ok
Dec 13 07:16:48 compute-0 ceph-mon[74928]: 2.d scrub starts
Dec 13 07:16:48 compute-0 ceph-mon[74928]: 2.d scrub ok
Dec 13 07:16:48 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 13 07:16:48 compute-0 ceph-mon[74928]: osdmap e87: 3 total, 3 up, 3 in
Dec 13 07:16:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 13 07:16:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 13 07:16:49 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:49 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:49 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 13 07:16:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:49 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v175: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Dec 13 07:16:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 13 07:16:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 13 07:16:50 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 13 07:16:50 compute-0 ceph-mon[74928]: osdmap e88: 3 total, 3 up, 3 in
Dec 13 07:16:50 compute-0 ceph-mon[74928]: pgmap v175: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Dec 13 07:16:50 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:51 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 13 07:16:51 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 13 07:16:51 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 13 07:16:51 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 13 07:16:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 13 07:16:51 compute-0 ceph-mon[74928]: osdmap e89: 3 total, 3 up, 3 in
Dec 13 07:16:51 compute-0 ceph-mon[74928]: 7.1b scrub starts
Dec 13 07:16:51 compute-0 ceph-mon[74928]: 7.1b scrub ok
Dec 13 07:16:51 compute-0 ceph-mon[74928]: 7.8 scrub starts
Dec 13 07:16:51 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 13 07:16:51 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988631248s) [1] async=[1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 active pruub 174.218078613s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:51 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988577843s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY pruub 174.218078613s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:51 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:51 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 13 07:16:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:52 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 13 07:16:52 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 13 07:16:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 13 07:16:52 compute-0 ceph-mon[74928]: 7.8 scrub ok
Dec 13 07:16:52 compute-0 ceph-mon[74928]: osdmap e90: 3 total, 3 up, 3 in
Dec 13 07:16:52 compute-0 ceph-mon[74928]: pgmap v178: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:52 compute-0 ceph-mon[74928]: 3.11 scrub starts
Dec 13 07:16:52 compute-0 ceph-mon[74928]: 3.11 scrub ok
Dec 13 07:16:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 13 07:16:52 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 13 07:16:52 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:53 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 13 07:16:53 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 13 07:16:53 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 13 07:16:53 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 13 07:16:53 compute-0 ceph-mon[74928]: osdmap e91: 3 total, 3 up, 3 in
Dec 13 07:16:53 compute-0 ceph-mon[74928]: 3.a scrub starts
Dec 13 07:16:53 compute-0 ceph-mon[74928]: 3.a scrub ok
Dec 13 07:16:53 compute-0 ceph-mon[74928]: 4.9 scrub starts
Dec 13 07:16:53 compute-0 ceph-mon[74928]: 4.9 scrub ok
Dec 13 07:16:54 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 13 07:16:54 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 13 07:16:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v180: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 13 07:16:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 13 07:16:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 13 07:16:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 13 07:16:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 13 07:16:54 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 13 07:16:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970234871s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 active pruub 165.223297119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:54 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:54 compute-0 ceph-mon[74928]: 7.f scrub starts
Dec 13 07:16:54 compute-0 ceph-mon[74928]: 7.f scrub ok
Dec 13 07:16:54 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:54 compute-0 ceph-mon[74928]: pgmap v180: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:16:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 13 07:16:55 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 13 07:16:55 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 13 07:16:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 13 07:16:55 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 13 07:16:55 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 13 07:16:55 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:55 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:55 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:55 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:55 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 13 07:16:55 compute-0 ceph-mon[74928]: osdmap e92: 3 total, 3 up, 3 in
Dec 13 07:16:55 compute-0 ceph-mon[74928]: 7.5 scrub starts
Dec 13 07:16:55 compute-0 ceph-mon[74928]: 7.5 scrub ok
Dec 13 07:16:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Dec 13 07:16:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 13 07:16:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 13 07:16:56 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 13 07:16:56 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 13 07:16:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 13 07:16:56 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 13 07:16:56 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 13 07:16:56 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 13 07:16:56 compute-0 ceph-mon[74928]: osdmap e93: 3 total, 3 up, 3 in
Dec 13 07:16:56 compute-0 ceph-mon[74928]: pgmap v183: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Dec 13 07:16:56 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 13 07:16:56 compute-0 ceph-mon[74928]: 6.b scrub starts
Dec 13 07:16:56 compute-0 ceph-mon[74928]: 6.b scrub ok
Dec 13 07:16:56 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:57 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 13 07:16:57 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 13 07:16:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:16:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 13 07:16:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 13 07:16:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:57 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:16:57 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 13 07:16:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.050017357s) [0] async=[0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 active pruub 174.272872925s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:16:57 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:16:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 13 07:16:57 compute-0 ceph-mon[74928]: osdmap e94: 3 total, 3 up, 3 in
Dec 13 07:16:57 compute-0 ceph-mon[74928]: 7.3 scrub starts
Dec 13 07:16:57 compute-0 ceph-mon[74928]: 7.3 scrub ok
Dec 13 07:16:57 compute-0 ceph-mon[74928]: osdmap e95: 3 total, 3 up, 3 in
Dec 13 07:16:58 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 13 07:16:58 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 13 07:16:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec 13 07:16:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 13 07:16:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 13 07:16:58 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 13 07:16:58 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 13 07:16:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 13 07:16:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 13 07:16:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 13 07:16:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 13 07:16:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 13 07:16:58 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 13 07:16:58 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=95/96 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:16:58 compute-0 ceph-mon[74928]: 5.3 scrub starts
Dec 13 07:16:58 compute-0 ceph-mon[74928]: 5.3 scrub ok
Dec 13 07:16:58 compute-0 ceph-mon[74928]: pgmap v186: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec 13 07:16:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 13 07:16:58 compute-0 ceph-mon[74928]: 7.2 scrub starts
Dec 13 07:16:58 compute-0 ceph-mon[74928]: 7.2 scrub ok
Dec 13 07:16:58 compute-0 ceph-mon[74928]: 5.9 scrub starts
Dec 13 07:16:58 compute-0 ceph-mon[74928]: 5.9 scrub ok
Dec 13 07:16:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 13 07:16:58 compute-0 ceph-mon[74928]: osdmap e96: 3 total, 3 up, 3 in
Dec 13 07:16:59 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 13 07:16:59 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 13 07:16:59 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 13 07:16:59 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 13 07:16:59 compute-0 ceph-mon[74928]: 4.1 scrub starts
Dec 13 07:16:59 compute-0 ceph-mon[74928]: 4.1 scrub ok
Dec 13 07:16:59 compute-0 ceph-mon[74928]: 4.8 scrub starts
Dec 13 07:16:59 compute-0 ceph-mon[74928]: 4.8 scrub ok
Dec 13 07:17:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 13 07:17:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 13 07:17:00 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 13 07:17:00 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 13 07:17:00 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 13 07:17:00 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 13 07:17:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 13 07:17:00 compute-0 ceph-mon[74928]: pgmap v188: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:00 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 13 07:17:00 compute-0 ceph-mon[74928]: 7.1 scrub starts
Dec 13 07:17:00 compute-0 ceph-mon[74928]: 7.1 scrub ok
Dec 13 07:17:00 compute-0 ceph-mon[74928]: 5.16 scrub starts
Dec 13 07:17:00 compute-0 ceph-mon[74928]: 5.16 scrub ok
Dec 13 07:17:00 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 13 07:17:00 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 13 07:17:00 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 13 07:17:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676051140s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 active pruub 179.950790405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:00 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676016808s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY pruub 179.950790405s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:00 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:01 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 13 07:17:01 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 13 07:17:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 13 07:17:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 13 07:17:01 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 13 07:17:01 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 13 07:17:01 compute-0 ceph-mon[74928]: osdmap e97: 3 total, 3 up, 3 in
Dec 13 07:17:01 compute-0 ceph-mon[74928]: 3.6 scrub starts
Dec 13 07:17:01 compute-0 ceph-mon[74928]: 3.6 scrub ok
Dec 13 07:17:01 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:01 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:01 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:01 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 13 07:17:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 13 07:17:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 13 07:17:02 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 13 07:17:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 13 07:17:02 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 13 07:17:02 compute-0 ceph-mon[74928]: osdmap e98: 3 total, 3 up, 3 in
Dec 13 07:17:02 compute-0 ceph-mon[74928]: pgmap v191: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:02 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 13 07:17:03 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 13 07:17:03 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:17:03 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 13 07:17:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 13 07:17:03 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 13 07:17:03 compute-0 ceph-mon[74928]: osdmap e99: 3 total, 3 up, 3 in
Dec 13 07:17:03 compute-0 ceph-mon[74928]: 4.a scrub starts
Dec 13 07:17:03 compute-0 ceph-mon[74928]: 4.a scrub ok
Dec 13 07:17:03 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 13 07:17:03 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 13 07:17:03 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603092194s) [2] async=[2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 active pruub 186.894073486s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:03 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603034973s) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY pruub 186.894073486s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:03 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:03 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec 13 07:17:04 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 13 07:17:04 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 13 07:17:04 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec 13 07:17:04 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec 13 07:17:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 13 07:17:04 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 13 07:17:04 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 13 07:17:04 compute-0 ceph-mon[74928]: osdmap e100: 3 total, 3 up, 3 in
Dec 13 07:17:04 compute-0 ceph-mon[74928]: pgmap v194: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Dec 13 07:17:04 compute-0 ceph-mon[74928]: 4.e scrub starts
Dec 13 07:17:04 compute-0 ceph-mon[74928]: 4.e scrub ok
Dec 13 07:17:04 compute-0 ceph-mon[74928]: 6.17 scrub starts
Dec 13 07:17:04 compute-0 ceph-mon[74928]: 6.17 scrub ok
Dec 13 07:17:04 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:17:05 compute-0 ceph-mon[74928]: osdmap e101: 3 total, 3 up, 3 in
Dec 13 07:17:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Dec 13 07:17:06 compute-0 ceph-mon[74928]: pgmap v196: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Dec 13 07:17:07 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 13 07:17:07 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 13 07:17:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:08 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 13 07:17:08 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 13 07:17:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 13 07:17:09 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 13 07:17:09 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 13 07:17:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:17:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:17:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:17:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:17:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:17:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:17:09 compute-0 ceph-mon[74928]: 3.7 scrub starts
Dec 13 07:17:09 compute-0 ceph-mon[74928]: 3.7 scrub ok
Dec 13 07:17:09 compute-0 ceph-mon[74928]: 5.2 scrub starts
Dec 13 07:17:09 compute-0 ceph-mon[74928]: 5.2 scrub ok
Dec 13 07:17:09 compute-0 ceph-mon[74928]: pgmap v197: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 13 07:17:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 1 objects/s recovering
Dec 13 07:17:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 13 07:17:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 13 07:17:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 13 07:17:10 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 13 07:17:10 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 13 07:17:10 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 13 07:17:10 compute-0 ceph-mon[74928]: 2.2 scrub starts
Dec 13 07:17:10 compute-0 ceph-mon[74928]: 2.2 scrub ok
Dec 13 07:17:10 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 13 07:17:10 compute-0 sudo[98582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:17:10 compute-0 sudo[98582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:10 compute-0 sudo[98582]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:10 compute-0 sudo[98607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:17:10 compute-0 sudo[98607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:11 compute-0 sudo[98607]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:17:11 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:17:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:17:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:17:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:17:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:17:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:17:11 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:17:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:17:11 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:17:11 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:17:11 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:17:11 compute-0 sudo[98661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:17:11 compute-0 sudo[98661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:11 compute-0 sudo[98661]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:11 compute-0 sudo[98686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:17:11 compute-0 sudo[98686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:11 compute-0 ceph-mon[74928]: pgmap v198: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 1 objects/s recovering
Dec 13 07:17:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 13 07:17:11 compute-0 ceph-mon[74928]: osdmap e102: 3 total, 3 up, 3 in
Dec 13 07:17:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:17:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:17:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:17:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:17:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:17:11 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:17:11 compute-0 podman[98722]: 2025-12-13 07:17:11.376137066 +0000 UTC m=+0.026139257 container create 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 07:17:11 compute-0 systemd[1]: Started libpod-conmon-32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090.scope.
Dec 13 07:17:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:17:11 compute-0 podman[98722]: 2025-12-13 07:17:11.428853712 +0000 UTC m=+0.078855913 container init 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:17:11 compute-0 podman[98722]: 2025-12-13 07:17:11.433915229 +0000 UTC m=+0.083917410 container start 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:17:11 compute-0 podman[98722]: 2025-12-13 07:17:11.434828466 +0000 UTC m=+0.084830666 container attach 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:17:11 compute-0 priceless_fermat[98735]: 167 167
Dec 13 07:17:11 compute-0 systemd[1]: libpod-32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090.scope: Deactivated successfully.
Dec 13 07:17:11 compute-0 conmon[98735]: conmon 32133b2d40986e3e7b4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090.scope/container/memory.events
Dec 13 07:17:11 compute-0 podman[98722]: 2025-12-13 07:17:11.438294141 +0000 UTC m=+0.088296323 container died 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb37cb0d0af3468a544873094f47f0cb4f1b5ee20c3aba764bfab891467521d2-merged.mount: Deactivated successfully.
Dec 13 07:17:11 compute-0 podman[98722]: 2025-12-13 07:17:11.457682506 +0000 UTC m=+0.107684686 container remove 32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:17:11 compute-0 podman[98722]: 2025-12-13 07:17:11.365044109 +0000 UTC m=+0.015046300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:17:11 compute-0 systemd[1]: libpod-conmon-32133b2d40986e3e7b4e89aea1dca17a273ecd891a4d624de6e3c79c3f726090.scope: Deactivated successfully.
Dec 13 07:17:11 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 13 07:17:11 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 13 07:17:11 compute-0 podman[98757]: 2025-12-13 07:17:11.567781881 +0000 UTC m=+0.026318825 container create 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:17:11 compute-0 systemd[1]: Started libpod-conmon-3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830.scope.
Dec 13 07:17:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:11 compute-0 podman[98757]: 2025-12-13 07:17:11.623649321 +0000 UTC m=+0.082186256 container init 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:17:11 compute-0 podman[98757]: 2025-12-13 07:17:11.629633903 +0000 UTC m=+0.088170837 container start 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 07:17:11 compute-0 podman[98757]: 2025-12-13 07:17:11.630771963 +0000 UTC m=+0.089308897 container attach 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:17:11 compute-0 podman[98757]: 2025-12-13 07:17:11.557571233 +0000 UTC m=+0.016108187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:17:11 compute-0 cool_fermi[98770]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:17:11 compute-0 cool_fermi[98770]: --> All data devices are unavailable
Dec 13 07:17:11 compute-0 systemd[1]: libpod-3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830.scope: Deactivated successfully.
Dec 13 07:17:11 compute-0 conmon[98770]: conmon 3b8e1449480c6725e3b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830.scope/container/memory.events
Dec 13 07:17:11 compute-0 podman[98757]: 2025-12-13 07:17:11.983226508 +0000 UTC m=+0.441763442 container died 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd3d609945e0dabe93181a9073fb9749b51368eef93287282f51163d9b4b375b-merged.mount: Deactivated successfully.
Dec 13 07:17:12 compute-0 podman[98757]: 2025-12-13 07:17:12.003084435 +0000 UTC m=+0.461621368 container remove 3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:17:12 compute-0 systemd[1]: libpod-conmon-3b8e1449480c6725e3b9d2f87f114024d71000986b1483dc7a47a123c03ca830.scope: Deactivated successfully.
Dec 13 07:17:12 compute-0 sudo[98686]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:12 compute-0 sudo[98801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:17:12 compute-0 sudo[98801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:12 compute-0 sudo[98801]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:12 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 13 07:17:12 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 13 07:17:12 compute-0 sudo[98826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:17:12 compute-0 sudo[98826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 1 objects/s recovering
Dec 13 07:17:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 13 07:17:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 13 07:17:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 13 07:17:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 13 07:17:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 13 07:17:12 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 13 07:17:12 compute-0 ceph-mon[74928]: 3.8 scrub starts
Dec 13 07:17:12 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 13 07:17:12 compute-0 podman[98861]: 2025-12-13 07:17:12.338398693 +0000 UTC m=+0.027200332 container create 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:17:12 compute-0 systemd[1]: Started libpod-conmon-721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015.scope.
Dec 13 07:17:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:17:12 compute-0 podman[98861]: 2025-12-13 07:17:12.376804351 +0000 UTC m=+0.065606010 container init 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:17:12 compute-0 podman[98861]: 2025-12-13 07:17:12.381220505 +0000 UTC m=+0.070022144 container start 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:17:12 compute-0 podman[98861]: 2025-12-13 07:17:12.382672966 +0000 UTC m=+0.071474605 container attach 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:17:12 compute-0 practical_cray[98875]: 167 167
Dec 13 07:17:12 compute-0 systemd[1]: libpod-721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015.scope: Deactivated successfully.
Dec 13 07:17:12 compute-0 podman[98861]: 2025-12-13 07:17:12.384546419 +0000 UTC m=+0.073348058 container died 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-de55810d77bccad7ebf53b9bea3abde3afe6a3d4ad7186527f67c49b505ebcc6-merged.mount: Deactivated successfully.
Dec 13 07:17:12 compute-0 podman[98861]: 2025-12-13 07:17:12.403290861 +0000 UTC m=+0.092092500 container remove 721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:17:12 compute-0 podman[98861]: 2025-12-13 07:17:12.328684469 +0000 UTC m=+0.017486128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:17:12 compute-0 systemd[1]: libpod-conmon-721d6069759e19019431bff10a600465c483c373417234e93963ebe5b4d21015.scope: Deactivated successfully.
Dec 13 07:17:12 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 13 07:17:12 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 13 07:17:12 compute-0 podman[98898]: 2025-12-13 07:17:12.517240806 +0000 UTC m=+0.029992542 container create 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:17:12 compute-0 systemd[1]: Started libpod-conmon-6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec.scope.
Dec 13 07:17:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:12 compute-0 podman[98898]: 2025-12-13 07:17:12.573369438 +0000 UTC m=+0.086121203 container init 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:17:12 compute-0 podman[98898]: 2025-12-13 07:17:12.578638665 +0000 UTC m=+0.091390411 container start 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:17:12 compute-0 podman[98898]: 2025-12-13 07:17:12.579874228 +0000 UTC m=+0.092625965 container attach 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:17:12 compute-0 podman[98898]: 2025-12-13 07:17:12.506245083 +0000 UTC m=+0.018996829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:17:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:12 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 13 07:17:12 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 13 07:17:12 compute-0 brave_wu[98911]: {
Dec 13 07:17:12 compute-0 brave_wu[98911]:     "0": [
Dec 13 07:17:12 compute-0 brave_wu[98911]:         {
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "devices": [
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "/dev/loop3"
Dec 13 07:17:12 compute-0 brave_wu[98911]:             ],
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_name": "ceph_lv0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_size": "21470642176",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "name": "ceph_lv0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "tags": {
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cluster_name": "ceph",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.crush_device_class": "",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.encrypted": "0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.objectstore": "bluestore",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osd_id": "0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.type": "block",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.vdo": "0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.with_tpm": "0"
Dec 13 07:17:12 compute-0 brave_wu[98911]:             },
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "type": "block",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "vg_name": "ceph_vg0"
Dec 13 07:17:12 compute-0 brave_wu[98911]:         }
Dec 13 07:17:12 compute-0 brave_wu[98911]:     ],
Dec 13 07:17:12 compute-0 brave_wu[98911]:     "1": [
Dec 13 07:17:12 compute-0 brave_wu[98911]:         {
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "devices": [
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "/dev/loop4"
Dec 13 07:17:12 compute-0 brave_wu[98911]:             ],
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_name": "ceph_lv1",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_size": "21470642176",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "name": "ceph_lv1",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "tags": {
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cluster_name": "ceph",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.crush_device_class": "",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.encrypted": "0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.objectstore": "bluestore",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osd_id": "1",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.type": "block",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.vdo": "0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.with_tpm": "0"
Dec 13 07:17:12 compute-0 brave_wu[98911]:             },
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "type": "block",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "vg_name": "ceph_vg1"
Dec 13 07:17:12 compute-0 brave_wu[98911]:         }
Dec 13 07:17:12 compute-0 brave_wu[98911]:     ],
Dec 13 07:17:12 compute-0 brave_wu[98911]:     "2": [
Dec 13 07:17:12 compute-0 brave_wu[98911]:         {
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "devices": [
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "/dev/loop5"
Dec 13 07:17:12 compute-0 brave_wu[98911]:             ],
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_name": "ceph_lv2",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_size": "21470642176",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "name": "ceph_lv2",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "tags": {
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.cluster_name": "ceph",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.crush_device_class": "",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.encrypted": "0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.objectstore": "bluestore",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osd_id": "2",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.type": "block",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.vdo": "0",
Dec 13 07:17:12 compute-0 brave_wu[98911]:                 "ceph.with_tpm": "0"
Dec 13 07:17:12 compute-0 brave_wu[98911]:             },
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "type": "block",
Dec 13 07:17:12 compute-0 brave_wu[98911]:             "vg_name": "ceph_vg2"
Dec 13 07:17:12 compute-0 brave_wu[98911]:         }
Dec 13 07:17:12 compute-0 brave_wu[98911]:     ]
Dec 13 07:17:12 compute-0 brave_wu[98911]: }
Dec 13 07:17:12 compute-0 systemd[1]: libpod-6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec.scope: Deactivated successfully.
Dec 13 07:17:12 compute-0 podman[98898]: 2025-12-13 07:17:12.818921017 +0000 UTC m=+0.331672753 container died 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 07:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87436af24b3a588628e95f8f214614f5784d8fc8e22d2115510ff1b75565949-merged.mount: Deactivated successfully.
Dec 13 07:17:12 compute-0 podman[98898]: 2025-12-13 07:17:12.841909901 +0000 UTC m=+0.354661637 container remove 6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:17:12 compute-0 systemd[1]: libpod-conmon-6a139bcc319cb51197f316e27764ad9ccec42a6ea33292fdf34fc2aff00feeec.scope: Deactivated successfully.
Dec 13 07:17:12 compute-0 sudo[98826]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:12 compute-0 sudo[98930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:17:12 compute-0 sudo[98930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:12 compute-0 sudo[98930]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:12 compute-0 sudo[98955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:17:12 compute-0 sudo[98955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:13 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 13 07:17:13 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 13 07:17:13 compute-0 podman[98990]: 2025-12-13 07:17:13.180459182 +0000 UTC m=+0.026249205 container create a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:17:13 compute-0 systemd[1]: Started libpod-conmon-a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb.scope.
Dec 13 07:17:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:17:13 compute-0 podman[98990]: 2025-12-13 07:17:13.225046352 +0000 UTC m=+0.070836376 container init a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 07:17:13 compute-0 podman[98990]: 2025-12-13 07:17:13.22936882 +0000 UTC m=+0.075158832 container start a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 13 07:17:13 compute-0 podman[98990]: 2025-12-13 07:17:13.230426057 +0000 UTC m=+0.076216071 container attach a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 07:17:13 compute-0 musing_villani[99003]: 167 167
Dec 13 07:17:13 compute-0 systemd[1]: libpod-a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb.scope: Deactivated successfully.
Dec 13 07:17:13 compute-0 podman[98990]: 2025-12-13 07:17:13.233178142 +0000 UTC m=+0.078968155 container died a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 07:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fd46416ec748a196ab7a4f1d585b16d150671b63be6524a8ecdb2b284fe7a2e-merged.mount: Deactivated successfully.
Dec 13 07:17:13 compute-0 podman[98990]: 2025-12-13 07:17:13.250418656 +0000 UTC m=+0.096208670 container remove a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:17:13 compute-0 podman[98990]: 2025-12-13 07:17:13.169844854 +0000 UTC m=+0.015634888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:17:13 compute-0 systemd[1]: libpod-conmon-a4f680c881e6a2ed54c6a77993f3738bbd406929b7c277f74a9ce22df8005edb.scope: Deactivated successfully.
Dec 13 07:17:13 compute-0 ceph-mon[74928]: 3.8 scrub ok
Dec 13 07:17:13 compute-0 ceph-mon[74928]: 5.5 scrub starts
Dec 13 07:17:13 compute-0 ceph-mon[74928]: 5.5 scrub ok
Dec 13 07:17:13 compute-0 ceph-mon[74928]: pgmap v200: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 1 objects/s recovering
Dec 13 07:17:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 13 07:17:13 compute-0 ceph-mon[74928]: osdmap e103: 3 total, 3 up, 3 in
Dec 13 07:17:13 compute-0 ceph-mon[74928]: 7.e scrub starts
Dec 13 07:17:13 compute-0 ceph-mon[74928]: 7.e scrub ok
Dec 13 07:17:13 compute-0 podman[99024]: 2025-12-13 07:17:13.366768695 +0000 UTC m=+0.025683941 container create 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:17:13 compute-0 systemd[1]: Started libpod-conmon-3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba.scope.
Dec 13 07:17:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:17:13 compute-0 podman[99024]: 2025-12-13 07:17:13.423397005 +0000 UTC m=+0.082312271 container init 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:17:13 compute-0 podman[99024]: 2025-12-13 07:17:13.428992797 +0000 UTC m=+0.087908043 container start 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:17:13 compute-0 podman[99024]: 2025-12-13 07:17:13.430047179 +0000 UTC m=+0.088962425 container attach 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:17:13 compute-0 podman[99024]: 2025-12-13 07:17:13.356611246 +0000 UTC m=+0.015526513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:17:13 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 13 07:17:13 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 13 07:17:13 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175204277s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 active pruub 190.297714233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:13 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:13 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:13 compute-0 lvm[99112]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:17:13 compute-0 lvm[99112]: VG ceph_vg0 finished
Dec 13 07:17:13 compute-0 lvm[99115]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:17:13 compute-0 lvm[99115]: VG ceph_vg1 finished
Dec 13 07:17:13 compute-0 lvm[99118]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:17:13 compute-0 lvm[99118]: VG ceph_vg2 finished
Dec 13 07:17:13 compute-0 lvm[99119]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:17:13 compute-0 lvm[99119]: VG ceph_vg1 finished
Dec 13 07:17:13 compute-0 hardcore_euler[99037]: {}
Dec 13 07:17:14 compute-0 systemd[1]: libpod-3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba.scope: Deactivated successfully.
Dec 13 07:17:14 compute-0 podman[99024]: 2025-12-13 07:17:14.015306698 +0000 UTC m=+0.674221944 container died 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:17:14 compute-0 lvm[99121]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:17:14 compute-0 lvm[99121]: VG ceph_vg1 finished
Dec 13 07:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-48e8f0278a1c08fcc1b4bbc16dcdd80a51cc13411f0ee3108bb2bc24883ea81a-merged.mount: Deactivated successfully.
Dec 13 07:17:14 compute-0 podman[99024]: 2025-12-13 07:17:14.039671598 +0000 UTC m=+0.698586844 container remove 3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:17:14 compute-0 systemd[1]: libpod-conmon-3fdb03f95a345b4c3d7a6696a5c6dd8e5750794cf82a83723d3d8c79e647faba.scope: Deactivated successfully.
Dec 13 07:17:14 compute-0 sudo[98955]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:17:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:17:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:17:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:17:14 compute-0 sudo[99132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:17:14 compute-0 sudo[99132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:17:14 compute-0 sudo[99132]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 1 objects/s recovering
Dec 13 07:17:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 13 07:17:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 13 07:17:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 13 07:17:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 13 07:17:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 13 07:17:14 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 13 07:17:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:14 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:14 compute-0 ceph-mon[74928]: 4.14 scrub starts
Dec 13 07:17:14 compute-0 ceph-mon[74928]: 4.14 scrub ok
Dec 13 07:17:14 compute-0 ceph-mon[74928]: 2.1c scrub starts
Dec 13 07:17:14 compute-0 ceph-mon[74928]: 2.1c scrub ok
Dec 13 07:17:14 compute-0 ceph-mon[74928]: 6.f scrub starts
Dec 13 07:17:14 compute-0 ceph-mon[74928]: 6.f scrub ok
Dec 13 07:17:14 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:17:14 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:17:14 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 13 07:17:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:14 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:14 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 13 07:17:14 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 13 07:17:14 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 13 07:17:14 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 13 07:17:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 13 07:17:15 compute-0 ceph-mon[74928]: pgmap v202: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 1 objects/s recovering
Dec 13 07:17:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 13 07:17:15 compute-0 ceph-mon[74928]: osdmap e104: 3 total, 3 up, 3 in
Dec 13 07:17:15 compute-0 ceph-mon[74928]: 7.c scrub starts
Dec 13 07:17:15 compute-0 ceph-mon[74928]: 7.c scrub ok
Dec 13 07:17:15 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 13 07:17:15 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 13 07:17:15 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:17:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 13 07:17:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 13 07:17:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 13 07:17:16 compute-0 ceph-mon[74928]: 7.6 scrub starts
Dec 13 07:17:16 compute-0 ceph-mon[74928]: 7.6 scrub ok
Dec 13 07:17:16 compute-0 ceph-mon[74928]: osdmap e105: 3 total, 3 up, 3 in
Dec 13 07:17:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 13 07:17:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 13 07:17:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 13 07:17:16 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:16 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:16 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.640290260s) [0] async=[0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 active pruub 193.476989746s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:16 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385339737s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 active pruub 189.222137451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:16 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:16 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:16 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 13 07:17:16 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 13 07:17:17 compute-0 ceph-mon[74928]: pgmap v205: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:17 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 13 07:17:17 compute-0 ceph-mon[74928]: osdmap e106: 3 total, 3 up, 3 in
Dec 13 07:17:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 13 07:17:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:17 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=106/107 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:17:17 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 13 07:17:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:17 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:17 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 13 07:17:17 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 13 07:17:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 1 unknown, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Dec 13 07:17:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 13 07:17:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 13 07:17:18 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 13 07:17:18 compute-0 ceph-mon[74928]: osdmap e107: 3 total, 3 up, 3 in
Dec 13 07:17:18 compute-0 ceph-mon[74928]: 3.5 scrub starts
Dec 13 07:17:18 compute-0 ceph-mon[74928]: 3.5 scrub ok
Dec 13 07:17:18 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:17:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 13 07:17:19 compute-0 ceph-mon[74928]: pgmap v208: 321 pgs: 1 unknown, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Dec 13 07:17:19 compute-0 ceph-mon[74928]: osdmap e108: 3 total, 3 up, 3 in
Dec 13 07:17:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 13 07:17:19 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 13 07:17:19 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:19 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:19 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559910774s) [0] async=[0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 active pruub 196.413848877s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:19 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:19 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 13 07:17:19 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 13 07:17:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 1 activating+remapped, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/251 objects misplaced (2.390%); 104 B/s, 2 objects/s recovering
Dec 13 07:17:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 13 07:17:20 compute-0 ceph-mon[74928]: osdmap e109: 3 total, 3 up, 3 in
Dec 13 07:17:20 compute-0 ceph-mon[74928]: 4.1a scrub starts
Dec 13 07:17:20 compute-0 ceph-mon[74928]: 4.1a scrub ok
Dec 13 07:17:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 13 07:17:20 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 13 07:17:20 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=109/110 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:17:20 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 13 07:17:20 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 13 07:17:21 compute-0 ceph-mon[74928]: pgmap v211: 321 pgs: 1 activating+remapped, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/251 objects misplaced (2.390%); 104 B/s, 2 objects/s recovering
Dec 13 07:17:21 compute-0 ceph-mon[74928]: osdmap e110: 3 total, 3 up, 3 in
Dec 13 07:17:21 compute-0 ceph-mon[74928]: 4.1b scrub starts
Dec 13 07:17:21 compute-0 ceph-mon[74928]: 4.1b scrub ok
Dec 13 07:17:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 1 activating+remapped, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 208 B/s wr, 10 op/s; 6/251 objects misplaced (2.390%); 30 B/s, 1 objects/s recovering
Dec 13 07:17:22 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 13 07:17:22 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 13 07:17:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:22 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 13 07:17:22 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 13 07:17:23 compute-0 ceph-mon[74928]: pgmap v213: 321 pgs: 1 activating+remapped, 1 peering, 319 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 208 B/s wr, 10 op/s; 6/251 objects misplaced (2.390%); 30 B/s, 1 objects/s recovering
Dec 13 07:17:23 compute-0 ceph-mon[74928]: 4.18 scrub starts
Dec 13 07:17:23 compute-0 ceph-mon[74928]: 4.18 scrub ok
Dec 13 07:17:23 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 13 07:17:24 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 13 07:17:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 170 B/s wr, 8 op/s; 50 B/s, 1 objects/s recovering
Dec 13 07:17:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 07:17:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:17:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 13 07:17:24 compute-0 ceph-mon[74928]: 5.4 scrub starts
Dec 13 07:17:24 compute-0 ceph-mon[74928]: 5.4 scrub ok
Dec 13 07:17:24 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 07:17:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:17:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 13 07:17:24 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 13 07:17:24 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319065094s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 active pruub 200.222534180s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:24 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:24 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 13 07:17:25 compute-0 ceph-mon[74928]: 7.9 scrub starts
Dec 13 07:17:25 compute-0 ceph-mon[74928]: 7.9 scrub ok
Dec 13 07:17:25 compute-0 ceph-mon[74928]: pgmap v214: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 170 B/s wr, 8 op/s; 50 B/s, 1 objects/s recovering
Dec 13 07:17:25 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 07:17:25 compute-0 ceph-mon[74928]: osdmap e111: 3 total, 3 up, 3 in
Dec 13 07:17:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 13 07:17:25 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 13 07:17:25 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:25 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:25 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 13 07:17:25 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 13 07:17:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 13 07:17:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 13 07:17:26 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 13 07:17:26 compute-0 ceph-mon[74928]: osdmap e112: 3 total, 3 up, 3 in
Dec 13 07:17:26 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 13 07:17:26 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 13 07:17:26 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 13 07:17:26 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:17:26 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 13 07:17:27 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 13 07:17:27 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 13 07:17:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 13 07:17:27 compute-0 ceph-mon[74928]: 2.15 scrub starts
Dec 13 07:17:27 compute-0 ceph-mon[74928]: 2.15 scrub ok
Dec 13 07:17:27 compute-0 ceph-mon[74928]: pgmap v217: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:27 compute-0 ceph-mon[74928]: osdmap e113: 3 total, 3 up, 3 in
Dec 13 07:17:27 compute-0 ceph-mon[74928]: 3.1e scrub starts
Dec 13 07:17:27 compute-0 ceph-mon[74928]: 3.1e scrub ok
Dec 13 07:17:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 13 07:17:27 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 13 07:17:27 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336898804s) [1] async=[1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 active pruub 204.257400513s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:27 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:17:27 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:17:27 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:17:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:28 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 13 07:17:28 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 13 07:17:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 13 07:17:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 13 07:17:28 compute-0 ceph-mon[74928]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 13 07:17:28 compute-0 ceph-mon[74928]: 5.12 scrub starts
Dec 13 07:17:28 compute-0 ceph-mon[74928]: 5.12 scrub ok
Dec 13 07:17:28 compute-0 ceph-mon[74928]: 3.3 scrub starts
Dec 13 07:17:28 compute-0 ceph-mon[74928]: 3.3 scrub ok
Dec 13 07:17:28 compute-0 ceph-mon[74928]: osdmap e114: 3 total, 3 up, 3 in
Dec 13 07:17:28 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:17:28 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 13 07:17:28 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 13 07:17:28 compute-0 sshd-session[99157]: Accepted publickey for zuul from 192.168.122.30 port 59634 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:17:28 compute-0 systemd-logind[745]: New session 36 of user zuul.
Dec 13 07:17:28 compute-0 systemd[1]: Started Session 36 of User zuul.
Dec 13 07:17:28 compute-0 sshd-session[99157]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:17:29 compute-0 ceph-mon[74928]: 5.7 scrub starts
Dec 13 07:17:29 compute-0 ceph-mon[74928]: 5.7 scrub ok
Dec 13 07:17:29 compute-0 ceph-mon[74928]: pgmap v220: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:29 compute-0 ceph-mon[74928]: osdmap e115: 3 total, 3 up, 3 in
Dec 13 07:17:29 compute-0 ceph-mon[74928]: 4.12 scrub starts
Dec 13 07:17:29 compute-0 python3.9[99310]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:17:29 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 13 07:17:29 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 13 07:17:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:30 compute-0 ceph-mon[74928]: 4.12 scrub ok
Dec 13 07:17:30 compute-0 sudo[99526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbaoxuyhkfybncjuuhugbldalabzbmgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610250.2867537-32-245506958489136/AnsiballZ_command.py'
Dec 13 07:17:30 compute-0 sudo[99526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:17:30 compute-0 python3.9[99528]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:17:31 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 13 07:17:31 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 13 07:17:31 compute-0 ceph-mon[74928]: 7.1a scrub starts
Dec 13 07:17:31 compute-0 ceph-mon[74928]: 7.1a scrub ok
Dec 13 07:17:31 compute-0 ceph-mon[74928]: pgmap v222: 321 pgs: 1 unknown, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:32 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 13 07:17:32 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 13 07:17:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Dec 13 07:17:32 compute-0 ceph-mon[74928]: 3.1 scrub starts
Dec 13 07:17:32 compute-0 ceph-mon[74928]: 3.1 scrub ok
Dec 13 07:17:32 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 13 07:17:32 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 13 07:17:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 13 07:17:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 13 07:17:33 compute-0 ceph-mon[74928]: 3.c scrub starts
Dec 13 07:17:33 compute-0 ceph-mon[74928]: 3.c scrub ok
Dec 13 07:17:33 compute-0 ceph-mon[74928]: pgmap v223: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Dec 13 07:17:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 261 B/s wr, 6 op/s; 42 B/s, 1 objects/s recovering
Dec 13 07:17:34 compute-0 ceph-mon[74928]: 6.8 scrub starts
Dec 13 07:17:34 compute-0 ceph-mon[74928]: 6.8 scrub ok
Dec 13 07:17:34 compute-0 ceph-mon[74928]: 7.4 scrub starts
Dec 13 07:17:34 compute-0 ceph-mon[74928]: 7.4 scrub ok
Dec 13 07:17:34 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 13 07:17:34 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 13 07:17:35 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 13 07:17:35 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 13 07:17:35 compute-0 ceph-mon[74928]: pgmap v224: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 261 B/s wr, 6 op/s; 42 B/s, 1 objects/s recovering
Dec 13 07:17:35 compute-0 ceph-mon[74928]: 10.1b scrub starts
Dec 13 07:17:36 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 13 07:17:36 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 13 07:17:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 231 B/s wr, 5 op/s; 37 B/s, 1 objects/s recovering
Dec 13 07:17:36 compute-0 ceph-mon[74928]: 10.1b scrub ok
Dec 13 07:17:36 compute-0 ceph-mon[74928]: 3.f scrub starts
Dec 13 07:17:36 compute-0 ceph-mon[74928]: 3.f scrub ok
Dec 13 07:17:36 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 13 07:17:36 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 13 07:17:36 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 13 07:17:36 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 13 07:17:36 compute-0 sudo[99526]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:37 compute-0 sshd-session[99160]: Connection closed by 192.168.122.30 port 59634
Dec 13 07:17:37 compute-0 sshd-session[99157]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:17:37 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Dec 13 07:17:37 compute-0 systemd[1]: session-36.scope: Consumed 6.449s CPU time.
Dec 13 07:17:37 compute-0 systemd-logind[745]: Session 36 logged out. Waiting for processes to exit.
Dec 13 07:17:37 compute-0 systemd-logind[745]: Removed session 36.
Dec 13 07:17:37 compute-0 ceph-mon[74928]: 3.1b scrub starts
Dec 13 07:17:37 compute-0 ceph-mon[74928]: 3.1b scrub ok
Dec 13 07:17:37 compute-0 ceph-mon[74928]: pgmap v225: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 231 B/s wr, 5 op/s; 37 B/s, 1 objects/s recovering
Dec 13 07:17:37 compute-0 ceph-mon[74928]: 2.17 scrub starts
Dec 13 07:17:37 compute-0 ceph-mon[74928]: 2.17 scrub ok
Dec 13 07:17:37 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 13 07:17:37 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 13 07:17:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:17:38
Dec 13 07:17:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:17:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:17:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups']
Dec 13 07:17:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:17:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 204 B/s wr, 5 op/s; 32 B/s, 1 objects/s recovering
Dec 13 07:17:38 compute-0 ceph-mon[74928]: 10.a scrub starts
Dec 13 07:17:38 compute-0 ceph-mon[74928]: 10.a scrub ok
Dec 13 07:17:38 compute-0 ceph-mon[74928]: 5.13 scrub starts
Dec 13 07:17:38 compute-0 ceph-mon[74928]: 5.13 scrub ok
Dec 13 07:17:38 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 13 07:17:38 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 13 07:17:38 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 13 07:17:38 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:17:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:17:39 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 13 07:17:39 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 13 07:17:39 compute-0 ceph-mon[74928]: pgmap v226: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 204 B/s wr, 5 op/s; 32 B/s, 1 objects/s recovering
Dec 13 07:17:39 compute-0 ceph-mon[74928]: 10.1f scrub starts
Dec 13 07:17:39 compute-0 ceph-mon[74928]: 5.11 scrub starts
Dec 13 07:17:39 compute-0 ceph-mon[74928]: 5.11 scrub ok
Dec 13 07:17:39 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 13 07:17:39 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 13 07:17:39 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 13 07:17:39 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 13 07:17:40 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 13 07:17:40 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 13 07:17:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 173 B/s wr, 4 op/s; 27 B/s, 1 objects/s recovering
Dec 13 07:17:40 compute-0 ceph-mon[74928]: 10.1f scrub ok
Dec 13 07:17:40 compute-0 ceph-mon[74928]: 7.1f scrub starts
Dec 13 07:17:40 compute-0 ceph-mon[74928]: 7.1f scrub ok
Dec 13 07:17:41 compute-0 ceph-mon[74928]: 10.1d scrub starts
Dec 13 07:17:41 compute-0 ceph-mon[74928]: 10.1d scrub ok
Dec 13 07:17:41 compute-0 ceph-mon[74928]: 6.c scrub starts
Dec 13 07:17:41 compute-0 ceph-mon[74928]: 6.c scrub ok
Dec 13 07:17:41 compute-0 ceph-mon[74928]: 2.18 scrub starts
Dec 13 07:17:41 compute-0 ceph-mon[74928]: 2.18 scrub ok
Dec 13 07:17:41 compute-0 ceph-mon[74928]: pgmap v227: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 173 B/s wr, 4 op/s; 27 B/s, 1 objects/s recovering
Dec 13 07:17:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 4 op/s; 27 B/s, 1 objects/s recovering
Dec 13 07:17:42 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 13 07:17:42 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 13 07:17:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:43 compute-0 ceph-mon[74928]: pgmap v228: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 4 op/s; 27 B/s, 1 objects/s recovering
Dec 13 07:17:43 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 13 07:17:43 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 13 07:17:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:44 compute-0 ceph-mon[74928]: 10.1c scrub starts
Dec 13 07:17:44 compute-0 ceph-mon[74928]: 10.1c scrub ok
Dec 13 07:17:45 compute-0 ceph-mon[74928]: 10.18 scrub starts
Dec 13 07:17:45 compute-0 ceph-mon[74928]: 10.18 scrub ok
Dec 13 07:17:45 compute-0 ceph-mon[74928]: pgmap v229: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:45 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 13 07:17:45 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 13 07:17:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:46 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 13 07:17:46 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 13 07:17:47 compute-0 ceph-mon[74928]: 10.5 scrub starts
Dec 13 07:17:47 compute-0 ceph-mon[74928]: 10.5 scrub ok
Dec 13 07:17:47 compute-0 ceph-mon[74928]: pgmap v230: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:47 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 13 07:17:47 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 13 07:17:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:17:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:17:48 compute-0 ceph-mon[74928]: 10.c scrub starts
Dec 13 07:17:48 compute-0 ceph-mon[74928]: 10.c scrub ok
Dec 13 07:17:49 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 13 07:17:49 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 13 07:17:49 compute-0 ceph-mon[74928]: 10.0 scrub starts
Dec 13 07:17:49 compute-0 ceph-mon[74928]: 10.0 scrub ok
Dec 13 07:17:49 compute-0 ceph-mon[74928]: pgmap v231: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:49 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 13 07:17:49 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 13 07:17:49 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 13 07:17:49 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 13 07:17:50 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 13 07:17:50 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 13 07:17:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:50 compute-0 ceph-mon[74928]: 7.18 scrub starts
Dec 13 07:17:50 compute-0 ceph-mon[74928]: 7.18 scrub ok
Dec 13 07:17:51 compute-0 ceph-mon[74928]: 10.3 scrub starts
Dec 13 07:17:51 compute-0 ceph-mon[74928]: 10.3 scrub ok
Dec 13 07:17:51 compute-0 ceph-mon[74928]: 6.1d scrub starts
Dec 13 07:17:51 compute-0 ceph-mon[74928]: 6.1d scrub ok
Dec 13 07:17:51 compute-0 ceph-mon[74928]: 3.9 scrub starts
Dec 13 07:17:51 compute-0 ceph-mon[74928]: 3.9 scrub ok
Dec 13 07:17:51 compute-0 ceph-mon[74928]: pgmap v232: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:52 compute-0 sshd-session[99585]: Accepted publickey for zuul from 192.168.122.30 port 45480 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:17:52 compute-0 systemd-logind[745]: New session 37 of user zuul.
Dec 13 07:17:52 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 13 07:17:52 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 13 07:17:52 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 13 07:17:52 compute-0 sshd-session[99585]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:17:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:53 compute-0 python3.9[99738]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 13 07:17:53 compute-0 ceph-mon[74928]: pgmap v233: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:53 compute-0 ceph-mon[74928]: 11.b scrub starts
Dec 13 07:17:53 compute-0 ceph-mon[74928]: 11.b scrub ok
Dec 13 07:17:53 compute-0 python3.9[99912]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:17:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:54 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 13 07:17:54 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 13 07:17:54 compute-0 sudo[100066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkujttioobyptrnzfpfmvafadftmaapy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610274.2907999-45-77787977118887/AnsiballZ_command.py'
Dec 13 07:17:54 compute-0 sudo[100066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:17:54 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 13 07:17:54 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 13 07:17:54 compute-0 python3.9[100068]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:17:54 compute-0 sudo[100066]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:55 compute-0 sudo[100219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thrvaasnflodwtsaioysgfxyheyevysn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610274.9795058-57-255504515574961/AnsiballZ_stat.py'
Dec 13 07:17:55 compute-0 sudo[100219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:17:55 compute-0 python3.9[100221]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:17:55 compute-0 ceph-mon[74928]: pgmap v234: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:55 compute-0 ceph-mon[74928]: 8.12 scrub starts
Dec 13 07:17:55 compute-0 ceph-mon[74928]: 8.12 scrub ok
Dec 13 07:17:55 compute-0 ceph-mon[74928]: 6.1c scrub starts
Dec 13 07:17:55 compute-0 ceph-mon[74928]: 6.1c scrub ok
Dec 13 07:17:55 compute-0 sudo[100219]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:55 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 13 07:17:55 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 13 07:17:55 compute-0 sudo[100373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suhzuogdbhvjsodwabiqwbfpstvxarzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610275.6930318-68-107974028224506/AnsiballZ_file.py'
Dec 13 07:17:55 compute-0 sudo[100373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:17:56 compute-0 python3.9[100375]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:17:56 compute-0 sudo[100373]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:56 compute-0 sudo[100525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiuftkekakusbpmircowwgvennkhuufv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610276.3021383-77-145222151108884/AnsiballZ_file.py'
Dec 13 07:17:56 compute-0 sudo[100525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:17:56 compute-0 ceph-mon[74928]: 2.1b scrub starts
Dec 13 07:17:56 compute-0 ceph-mon[74928]: 2.1b scrub ok
Dec 13 07:17:56 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 13 07:17:56 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 13 07:17:56 compute-0 python3.9[100527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:17:56 compute-0 sudo[100525]: pam_unix(sudo:session): session closed for user root
Dec 13 07:17:57 compute-0 python3.9[100677]: ansible-ansible.builtin.service_facts Invoked
Dec 13 07:17:57 compute-0 network[100694]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:17:57 compute-0 network[100695]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:17:57 compute-0 network[100696]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:17:57 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 13 07:17:57 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 13 07:17:57 compute-0 ceph-mon[74928]: pgmap v235: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:57 compute-0 ceph-mon[74928]: 6.e scrub starts
Dec 13 07:17:57 compute-0 ceph-mon[74928]: 6.e scrub ok
Dec 13 07:17:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:17:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v236: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 13 07:17:58 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 13 07:17:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 13 07:17:58 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 13 07:17:58 compute-0 ceph-mon[74928]: 8.1d scrub starts
Dec 13 07:17:58 compute-0 ceph-mon[74928]: 8.1d scrub ok
Dec 13 07:17:59 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 13 07:17:59 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 13 07:17:59 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 13 07:17:59 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 13 07:17:59 compute-0 ceph-mon[74928]: pgmap v236: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:17:59 compute-0 ceph-mon[74928]: 2.a scrub starts
Dec 13 07:17:59 compute-0 ceph-mon[74928]: 11.11 scrub starts
Dec 13 07:17:59 compute-0 ceph-mon[74928]: 2.a scrub ok
Dec 13 07:17:59 compute-0 ceph-mon[74928]: 11.11 scrub ok
Dec 13 07:18:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:00 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 13 07:18:00 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 13 07:18:00 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 13 07:18:00 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 13 07:18:00 compute-0 ceph-mon[74928]: 8.14 scrub starts
Dec 13 07:18:00 compute-0 ceph-mon[74928]: 8.14 scrub ok
Dec 13 07:18:00 compute-0 ceph-mon[74928]: 8.1c scrub starts
Dec 13 07:18:00 compute-0 ceph-mon[74928]: 8.1c scrub ok
Dec 13 07:18:00 compute-0 python3.9[100956]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:18:01 compute-0 python3.9[101106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:18:01 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec 13 07:18:01 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec 13 07:18:01 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 13 07:18:01 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 13 07:18:01 compute-0 ceph-mon[74928]: pgmap v237: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:01 compute-0 ceph-mon[74928]: 11.1e scrub starts
Dec 13 07:18:01 compute-0 ceph-mon[74928]: 11.1e scrub ok
Dec 13 07:18:01 compute-0 ceph-mon[74928]: 2.3 scrub starts
Dec 13 07:18:01 compute-0 ceph-mon[74928]: 2.3 scrub ok
Dec 13 07:18:02 compute-0 python3.9[101260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:18:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:02 compute-0 sudo[101416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txpjuxmtxlfklqdiarudevtaemqhdznb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610282.3181968-125-215364423398981/AnsiballZ_setup.py'
Dec 13 07:18:02 compute-0 sudo[101416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:02 compute-0 ceph-mon[74928]: 11.17 scrub starts
Dec 13 07:18:02 compute-0 ceph-mon[74928]: 11.17 scrub ok
Dec 13 07:18:02 compute-0 ceph-mon[74928]: 2.5 scrub starts
Dec 13 07:18:02 compute-0 ceph-mon[74928]: 2.5 scrub ok
Dec 13 07:18:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:02 compute-0 python3.9[101418]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:18:02 compute-0 sudo[101416]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:03 compute-0 sudo[101500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caekkxsaokjrrufrejsqejsznnpivqhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610282.3181968-125-215364423398981/AnsiballZ_dnf.py'
Dec 13 07:18:03 compute-0 sudo[101500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:03 compute-0 python3.9[101502]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:18:03 compute-0 ceph-mon[74928]: pgmap v238: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:03 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 13 07:18:03 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 13 07:18:04 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 13 07:18:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:04 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 13 07:18:04 compute-0 ceph-mon[74928]: 11.1c scrub starts
Dec 13 07:18:04 compute-0 ceph-mon[74928]: 11.1c scrub ok
Dec 13 07:18:05 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 13 07:18:05 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 13 07:18:05 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 13 07:18:05 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 13 07:18:05 compute-0 ceph-mon[74928]: 8.1f scrub starts
Dec 13 07:18:05 compute-0 ceph-mon[74928]: pgmap v239: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:05 compute-0 ceph-mon[74928]: 8.1f scrub ok
Dec 13 07:18:05 compute-0 ceph-mon[74928]: 11.1b scrub starts
Dec 13 07:18:05 compute-0 ceph-mon[74928]: 11.1b scrub ok
Dec 13 07:18:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:06 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 13 07:18:06 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 13 07:18:06 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 13 07:18:06 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 13 07:18:06 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 13 07:18:06 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 13 07:18:06 compute-0 ceph-mon[74928]: 4.10 scrub starts
Dec 13 07:18:06 compute-0 ceph-mon[74928]: 4.10 scrub ok
Dec 13 07:18:06 compute-0 ceph-mon[74928]: 11.19 scrub starts
Dec 13 07:18:06 compute-0 ceph-mon[74928]: 11.19 scrub ok
Dec 13 07:18:06 compute-0 ceph-mon[74928]: 11.12 scrub starts
Dec 13 07:18:06 compute-0 ceph-mon[74928]: 11.12 scrub ok
Dec 13 07:18:07 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 13 07:18:07 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 13 07:18:07 compute-0 ceph-mon[74928]: pgmap v240: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:07 compute-0 ceph-mon[74928]: 2.4 scrub starts
Dec 13 07:18:07 compute-0 ceph-mon[74928]: 2.4 scrub ok
Dec 13 07:18:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:08 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 13 07:18:08 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 13 07:18:08 compute-0 ceph-mon[74928]: 2.7 scrub starts
Dec 13 07:18:08 compute-0 ceph-mon[74928]: 2.7 scrub ok
Dec 13 07:18:08 compute-0 ceph-mon[74928]: 8.1a scrub starts
Dec 13 07:18:08 compute-0 ceph-mon[74928]: 8.1a scrub ok
Dec 13 07:18:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:18:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:18:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:18:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:18:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:18:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:18:09 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 13 07:18:09 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 13 07:18:09 compute-0 ceph-mon[74928]: pgmap v241: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:09 compute-0 ceph-mon[74928]: 8.18 scrub starts
Dec 13 07:18:09 compute-0 ceph-mon[74928]: 8.18 scrub ok
Dec 13 07:18:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:10 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 13 07:18:10 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 13 07:18:10 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 13 07:18:10 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 13 07:18:10 compute-0 ceph-mon[74928]: 10.1e scrub starts
Dec 13 07:18:10 compute-0 ceph-mon[74928]: 10.1e scrub ok
Dec 13 07:18:11 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 13 07:18:11 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 13 07:18:11 compute-0 ceph-mon[74928]: pgmap v242: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:11 compute-0 ceph-mon[74928]: 2.6 scrub starts
Dec 13 07:18:11 compute-0 ceph-mon[74928]: 2.6 scrub ok
Dec 13 07:18:11 compute-0 ceph-mon[74928]: 11.1f scrub starts
Dec 13 07:18:11 compute-0 ceph-mon[74928]: 11.1f scrub ok
Dec 13 07:18:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:13 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 13 07:18:13 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 13 07:18:13 compute-0 ceph-mon[74928]: pgmap v243: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:13 compute-0 ceph-mon[74928]: 8.1b scrub starts
Dec 13 07:18:13 compute-0 ceph-mon[74928]: 8.1b scrub ok
Dec 13 07:18:14 compute-0 sudo[101575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:18:14 compute-0 sudo[101575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:14 compute-0 sudo[101575]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:14 compute-0 sudo[101600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:18:14 compute-0 sudo[101600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v244: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:14 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 13 07:18:14 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 13 07:18:14 compute-0 ceph-mon[74928]: 8.4 scrub starts
Dec 13 07:18:14 compute-0 ceph-mon[74928]: 8.4 scrub ok
Dec 13 07:18:14 compute-0 sudo[101600]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:18:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:18:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:18:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:18:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:18:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:18:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:18:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:18:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:18:14 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:18:14 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:18:14 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:18:14 compute-0 sudo[101654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:18:14 compute-0 sudo[101654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:14 compute-0 sudo[101654]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:14 compute-0 sudo[101679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:18:14 compute-0 sudo[101679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:14 compute-0 podman[101713]: 2025-12-13 07:18:14.92982564 +0000 UTC m=+0.026403177 container create a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:18:14 compute-0 systemd[1]: Started libpod-conmon-a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b.scope.
Dec 13 07:18:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:18:14 compute-0 podman[101713]: 2025-12-13 07:18:14.976885842 +0000 UTC m=+0.073463378 container init a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:18:14 compute-0 podman[101713]: 2025-12-13 07:18:14.981325012 +0000 UTC m=+0.077902549 container start a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:18:14 compute-0 podman[101713]: 2025-12-13 07:18:14.982398382 +0000 UTC m=+0.078975918 container attach a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:18:14 compute-0 goofy_thompson[101725]: 167 167
Dec 13 07:18:14 compute-0 systemd[1]: libpod-a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b.scope: Deactivated successfully.
Dec 13 07:18:14 compute-0 podman[101713]: 2025-12-13 07:18:14.986780804 +0000 UTC m=+0.083358342 container died a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4087827a87befefe8b62d9e4fe30061ffb9ba55e9f8cc51256119b588913f5b-merged.mount: Deactivated successfully.
Dec 13 07:18:15 compute-0 podman[101713]: 2025-12-13 07:18:15.007913208 +0000 UTC m=+0.104490745 container remove a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_thompson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:18:15 compute-0 podman[101713]: 2025-12-13 07:18:14.918634347 +0000 UTC m=+0.015211904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:18:15 compute-0 systemd[1]: libpod-conmon-a61e39debae75637337943f67cc2c269bef86d33dac2b27f81115e53ea3fb26b.scope: Deactivated successfully.
Dec 13 07:18:15 compute-0 podman[101748]: 2025-12-13 07:18:15.124046828 +0000 UTC m=+0.027929090 container create 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 07:18:15 compute-0 systemd[1]: Started libpod-conmon-2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a.scope.
Dec 13 07:18:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:15 compute-0 podman[101748]: 2025-12-13 07:18:15.181765166 +0000 UTC m=+0.085647449 container init 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:18:15 compute-0 podman[101748]: 2025-12-13 07:18:15.189040207 +0000 UTC m=+0.092922468 container start 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 13 07:18:15 compute-0 podman[101748]: 2025-12-13 07:18:15.189986726 +0000 UTC m=+0.093868988 container attach 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 07:18:15 compute-0 podman[101748]: 2025-12-13 07:18:15.113382442 +0000 UTC m=+0.017264724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:18:15 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 13 07:18:15 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 13 07:18:15 compute-0 beautiful_saha[101761]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:18:15 compute-0 beautiful_saha[101761]: --> All data devices are unavailable
Dec 13 07:18:15 compute-0 systemd[1]: libpod-2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a.scope: Deactivated successfully.
Dec 13 07:18:15 compute-0 podman[101748]: 2025-12-13 07:18:15.558108444 +0000 UTC m=+0.461990705 container died 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cdb8224432ad0883b841ad2d80c583b714ec93354de428c19a764da1f502d36-merged.mount: Deactivated successfully.
Dec 13 07:18:15 compute-0 ceph-mon[74928]: pgmap v244: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:18:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:18:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:18:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:18:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:18:15 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:18:15 compute-0 ceph-mon[74928]: 11.18 scrub starts
Dec 13 07:18:15 compute-0 ceph-mon[74928]: 11.18 scrub ok
Dec 13 07:18:15 compute-0 podman[101748]: 2025-12-13 07:18:15.583170736 +0000 UTC m=+0.487052997 container remove 2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_saha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 07:18:15 compute-0 systemd[1]: libpod-conmon-2def15fcea25daf4bcb8cb944bd814e421f217831a53eddf67ca9ed936ba460a.scope: Deactivated successfully.
Dec 13 07:18:15 compute-0 sudo[101679]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:15 compute-0 sudo[101791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:18:15 compute-0 sudo[101791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:15 compute-0 sudo[101791]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:15 compute-0 sudo[101816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:18:15 compute-0 sudo[101816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:15 compute-0 podman[101851]: 2025-12-13 07:18:15.916348544 +0000 UTC m=+0.032260719 container create fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:18:15 compute-0 systemd[1]: Started libpod-conmon-fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa.scope.
Dec 13 07:18:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:18:15 compute-0 podman[101851]: 2025-12-13 07:18:15.968006435 +0000 UTC m=+0.083918630 container init fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:18:15 compute-0 podman[101851]: 2025-12-13 07:18:15.973149316 +0000 UTC m=+0.089061492 container start fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:18:15 compute-0 podman[101851]: 2025-12-13 07:18:15.974379481 +0000 UTC m=+0.090291657 container attach fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 07:18:15 compute-0 dreamy_murdock[101865]: 167 167
Dec 13 07:18:15 compute-0 podman[101851]: 2025-12-13 07:18:15.975789275 +0000 UTC m=+0.091701451 container died fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:18:15 compute-0 systemd[1]: libpod-fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa.scope: Deactivated successfully.
Dec 13 07:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f982b0683540cf617b44a648a576920fda045816fe40783b5d3100b300e636b5-merged.mount: Deactivated successfully.
Dec 13 07:18:15 compute-0 podman[101851]: 2025-12-13 07:18:15.996068257 +0000 UTC m=+0.111980431 container remove fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_murdock, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 07:18:15 compute-0 podman[101851]: 2025-12-13 07:18:15.90584502 +0000 UTC m=+0.021757205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:18:16 compute-0 systemd[1]: libpod-conmon-fe7a2b32c9028df0efb508436dc14e72f1f79050ec8ada51485c784f0b9496aa.scope: Deactivated successfully.
Dec 13 07:18:16 compute-0 podman[101887]: 2025-12-13 07:18:16.108920726 +0000 UTC m=+0.027710328 container create c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 07:18:16 compute-0 systemd[1]: Started libpod-conmon-c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a.scope.
Dec 13 07:18:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:16 compute-0 podman[101887]: 2025-12-13 07:18:16.160856642 +0000 UTC m=+0.079646266 container init c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:18:16 compute-0 podman[101887]: 2025-12-13 07:18:16.166094553 +0000 UTC m=+0.084884156 container start c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:18:16 compute-0 podman[101887]: 2025-12-13 07:18:16.169199182 +0000 UTC m=+0.087988784 container attach c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 07:18:16 compute-0 podman[101887]: 2025-12-13 07:18:16.098142693 +0000 UTC m=+0.016932316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:18:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:16 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 13 07:18:16 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 13 07:18:16 compute-0 festive_sanderson[101900]: {
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:     "0": [
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:         {
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "devices": [
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "/dev/loop3"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             ],
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_name": "ceph_lv0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_size": "21470642176",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "name": "ceph_lv0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "tags": {
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cluster_name": "ceph",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.crush_device_class": "",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.encrypted": "0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.objectstore": "bluestore",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osd_id": "0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.type": "block",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.vdo": "0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.with_tpm": "0"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             },
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "type": "block",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "vg_name": "ceph_vg0"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:         }
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:     ],
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:     "1": [
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:         {
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "devices": [
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "/dev/loop4"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             ],
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_name": "ceph_lv1",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_size": "21470642176",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "name": "ceph_lv1",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "tags": {
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cluster_name": "ceph",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.crush_device_class": "",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.encrypted": "0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.objectstore": "bluestore",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osd_id": "1",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.type": "block",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.vdo": "0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.with_tpm": "0"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             },
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "type": "block",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "vg_name": "ceph_vg1"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:         }
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:     ],
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:     "2": [
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:         {
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "devices": [
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "/dev/loop5"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             ],
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_name": "ceph_lv2",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_size": "21470642176",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "name": "ceph_lv2",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "tags": {
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.cluster_name": "ceph",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.crush_device_class": "",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.encrypted": "0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.objectstore": "bluestore",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osd_id": "2",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.type": "block",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.vdo": "0",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:                 "ceph.with_tpm": "0"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             },
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "type": "block",
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:             "vg_name": "ceph_vg2"
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:         }
Dec 13 07:18:16 compute-0 festive_sanderson[101900]:     ]
Dec 13 07:18:16 compute-0 festive_sanderson[101900]: }
Dec 13 07:18:16 compute-0 systemd[1]: libpod-c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a.scope: Deactivated successfully.
Dec 13 07:18:16 compute-0 podman[101887]: 2025-12-13 07:18:16.406789133 +0000 UTC m=+0.325578736 container died c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 07:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-96f18becab63c26534d0e988a01d6057a413f22a9c5283d43cc977bf9e92f7fa-merged.mount: Deactivated successfully.
Dec 13 07:18:16 compute-0 podman[101887]: 2025-12-13 07:18:16.42811896 +0000 UTC m=+0.346908563 container remove c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sanderson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:18:16 compute-0 systemd[1]: libpod-conmon-c6cbfb303e72f8df25c509765155d1d3738c032b8d92b3739bc4522d53ea7c1a.scope: Deactivated successfully.
Dec 13 07:18:16 compute-0 sudo[101816]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:16 compute-0 sudo[101920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:18:16 compute-0 sudo[101920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:16 compute-0 sudo[101920]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:16 compute-0 sudo[101945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:18:16 compute-0 sudo[101945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:16 compute-0 ceph-mon[74928]: 8.d scrub starts
Dec 13 07:18:16 compute-0 ceph-mon[74928]: 8.d scrub ok
Dec 13 07:18:16 compute-0 podman[101979]: 2025-12-13 07:18:16.764056335 +0000 UTC m=+0.026314910 container create 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:18:16 compute-0 systemd[1]: Started libpod-conmon-577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0.scope.
Dec 13 07:18:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:18:16 compute-0 podman[101979]: 2025-12-13 07:18:16.821644275 +0000 UTC m=+0.083902860 container init 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:18:16 compute-0 podman[101979]: 2025-12-13 07:18:16.826815109 +0000 UTC m=+0.089073674 container start 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 07:18:16 compute-0 podman[101979]: 2025-12-13 07:18:16.827777999 +0000 UTC m=+0.090036563 container attach 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:18:16 compute-0 zealous_turing[101992]: 167 167
Dec 13 07:18:16 compute-0 systemd[1]: libpod-577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0.scope: Deactivated successfully.
Dec 13 07:18:16 compute-0 podman[101979]: 2025-12-13 07:18:16.829374496 +0000 UTC m=+0.091633061 container died 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2fbe5c6fbb85f1cbf6f65b9ce91f1358e51bc2b2d5eeabfeb677cce6e785734-merged.mount: Deactivated successfully.
Dec 13 07:18:16 compute-0 podman[101979]: 2025-12-13 07:18:16.84533268 +0000 UTC m=+0.107591245 container remove 577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 07:18:16 compute-0 podman[101979]: 2025-12-13 07:18:16.753901521 +0000 UTC m=+0.016160106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:18:16 compute-0 systemd[1]: libpod-conmon-577dda61e8804ed7bd28c739e1bd78ab51ea2357605f3e81dba760e54fe73cb0.scope: Deactivated successfully.
Dec 13 07:18:16 compute-0 podman[102013]: 2025-12-13 07:18:16.953056664 +0000 UTC m=+0.025861754 container create 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:18:16 compute-0 systemd[1]: Started libpod-conmon-58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf.scope.
Dec 13 07:18:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:18:16 compute-0 podman[102013]: 2025-12-13 07:18:16.998316353 +0000 UTC m=+0.071121453 container init 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 07:18:17 compute-0 podman[102013]: 2025-12-13 07:18:17.006264286 +0000 UTC m=+0.079069384 container start 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:18:17 compute-0 podman[102013]: 2025-12-13 07:18:17.007390154 +0000 UTC m=+0.080195253 container attach 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:18:17 compute-0 podman[102013]: 2025-12-13 07:18:16.942968717 +0000 UTC m=+0.015773816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:18:17 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 13 07:18:17 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 13 07:18:17 compute-0 lvm[102104]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:18:17 compute-0 lvm[102103]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:18:17 compute-0 lvm[102104]: VG ceph_vg1 finished
Dec 13 07:18:17 compute-0 lvm[102103]: VG ceph_vg0 finished
Dec 13 07:18:17 compute-0 lvm[102107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:18:17 compute-0 lvm[102107]: VG ceph_vg2 finished
Dec 13 07:18:17 compute-0 ceph-mon[74928]: pgmap v245: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:17 compute-0 ceph-mon[74928]: 11.3 scrub starts
Dec 13 07:18:17 compute-0 ceph-mon[74928]: 11.3 scrub ok
Dec 13 07:18:17 compute-0 sleepy_sammet[102026]: {}
Dec 13 07:18:17 compute-0 systemd[1]: libpod-58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf.scope: Deactivated successfully.
Dec 13 07:18:17 compute-0 podman[102013]: 2025-12-13 07:18:17.623510539 +0000 UTC m=+0.696315627 container died 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 07:18:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-982a89a27ea56fa0ab758d913b42ffd7aa1aeb6d20eba840c04ce1d1e06cce73-merged.mount: Deactivated successfully.
Dec 13 07:18:17 compute-0 podman[102013]: 2025-12-13 07:18:17.648091891 +0000 UTC m=+0.720896990 container remove 58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sammet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:18:17 compute-0 systemd[1]: libpod-conmon-58a39ab197f7066601e2a5169f93687835bab978301decd31ff754723f2ac1cf.scope: Deactivated successfully.
Dec 13 07:18:17 compute-0 sudo[101945]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:18:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:18:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:18:17 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:18:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:17 compute-0 sudo[102118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:18:17 compute-0 sudo[102118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:18:17 compute-0 sudo[102118]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:18 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 13 07:18:18 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 13 07:18:18 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 13 07:18:18 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 13 07:18:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:18:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:18:18 compute-0 ceph-mon[74928]: pgmap v246: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:18 compute-0 ceph-mon[74928]: 11.d scrub starts
Dec 13 07:18:18 compute-0 ceph-mon[74928]: 11.d scrub ok
Dec 13 07:18:19 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 13 07:18:19 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 13 07:18:19 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 13 07:18:19 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 13 07:18:19 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 13 07:18:19 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 13 07:18:19 compute-0 ceph-mon[74928]: 2.9 scrub starts
Dec 13 07:18:19 compute-0 ceph-mon[74928]: 2.9 scrub ok
Dec 13 07:18:19 compute-0 ceph-mon[74928]: 11.2 scrub starts
Dec 13 07:18:19 compute-0 ceph-mon[74928]: 11.2 scrub ok
Dec 13 07:18:19 compute-0 ceph-mon[74928]: 11.1 scrub starts
Dec 13 07:18:19 compute-0 ceph-mon[74928]: 11.1 scrub ok
Dec 13 07:18:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:20 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 13 07:18:20 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 13 07:18:20 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 13 07:18:20 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 13 07:18:20 compute-0 ceph-mon[74928]: 5.f scrub starts
Dec 13 07:18:20 compute-0 ceph-mon[74928]: 5.f scrub ok
Dec 13 07:18:20 compute-0 ceph-mon[74928]: pgmap v247: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:20 compute-0 ceph-mon[74928]: 11.f scrub starts
Dec 13 07:18:20 compute-0 ceph-mon[74928]: 11.f scrub ok
Dec 13 07:18:21 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 13 07:18:21 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 13 07:18:21 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 13 07:18:21 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 13 07:18:21 compute-0 ceph-mon[74928]: 5.c scrub starts
Dec 13 07:18:21 compute-0 ceph-mon[74928]: 5.c scrub ok
Dec 13 07:18:21 compute-0 ceph-mon[74928]: 11.8 scrub starts
Dec 13 07:18:21 compute-0 ceph-mon[74928]: 11.8 scrub ok
Dec 13 07:18:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:22 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 13 07:18:22 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 13 07:18:22 compute-0 ceph-mon[74928]: 5.1 scrub starts
Dec 13 07:18:22 compute-0 ceph-mon[74928]: 5.1 scrub ok
Dec 13 07:18:22 compute-0 ceph-mon[74928]: pgmap v248: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:23 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 13 07:18:23 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 13 07:18:23 compute-0 ceph-mon[74928]: 5.1a scrub starts
Dec 13 07:18:23 compute-0 ceph-mon[74928]: 5.1a scrub ok
Dec 13 07:18:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:24 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 13 07:18:24 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 13 07:18:24 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 13 07:18:24 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 13 07:18:24 compute-0 ceph-mon[74928]: 5.18 scrub starts
Dec 13 07:18:24 compute-0 ceph-mon[74928]: 5.18 scrub ok
Dec 13 07:18:24 compute-0 ceph-mon[74928]: pgmap v249: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:24 compute-0 ceph-mon[74928]: 5.19 scrub starts
Dec 13 07:18:24 compute-0 ceph-mon[74928]: 5.19 scrub ok
Dec 13 07:18:24 compute-0 ceph-mon[74928]: 8.e scrub starts
Dec 13 07:18:24 compute-0 ceph-mon[74928]: 8.e scrub ok
Dec 13 07:18:25 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec 13 07:18:25 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec 13 07:18:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 13 07:18:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 13 07:18:25 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 13 07:18:25 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 13 07:18:25 compute-0 ceph-mon[74928]: 6.1e scrub starts
Dec 13 07:18:25 compute-0 ceph-mon[74928]: 6.1e scrub ok
Dec 13 07:18:25 compute-0 ceph-mon[74928]: 11.15 scrub starts
Dec 13 07:18:25 compute-0 ceph-mon[74928]: 11.15 scrub ok
Dec 13 07:18:25 compute-0 ceph-mon[74928]: 8.c scrub starts
Dec 13 07:18:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:26 compute-0 ceph-mon[74928]: 8.c scrub ok
Dec 13 07:18:26 compute-0 ceph-mon[74928]: pgmap v250: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:27 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 13 07:18:27 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 13 07:18:27 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 13 07:18:27 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 13 07:18:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:27 compute-0 ceph-mon[74928]: 8.15 scrub starts
Dec 13 07:18:27 compute-0 ceph-mon[74928]: 8.15 scrub ok
Dec 13 07:18:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:28 compute-0 ceph-mon[74928]: 11.6 scrub starts
Dec 13 07:18:28 compute-0 ceph-mon[74928]: 11.6 scrub ok
Dec 13 07:18:28 compute-0 ceph-mon[74928]: pgmap v251: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 13 07:18:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 13 07:18:29 compute-0 ceph-mon[74928]: 11.16 scrub starts
Dec 13 07:18:29 compute-0 ceph-mon[74928]: 11.16 scrub ok
Dec 13 07:18:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 321 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:30 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 13 07:18:30 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 13 07:18:30 compute-0 ceph-mon[74928]: pgmap v252: 321 pgs: 321 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:30 compute-0 ceph-mon[74928]: 8.16 scrub starts
Dec 13 07:18:30 compute-0 ceph-mon[74928]: 8.16 scrub ok
Dec 13 07:18:31 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 13 07:18:31 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 13 07:18:31 compute-0 ceph-mon[74928]: 8.17 scrub starts
Dec 13 07:18:31 compute-0 ceph-mon[74928]: 8.17 scrub ok
Dec 13 07:18:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:32 compute-0 ceph-mon[74928]: pgmap v253: 321 pgs: 321 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:33 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 13 07:18:33 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 13 07:18:33 compute-0 ceph-mon[74928]: 11.1a scrub starts
Dec 13 07:18:33 compute-0 ceph-mon[74928]: 11.1a scrub ok
Dec 13 07:18:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:34 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 13 07:18:34 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 13 07:18:34 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 13 07:18:34 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 13 07:18:34 compute-0 ceph-mon[74928]: pgmap v254: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:34 compute-0 ceph-mon[74928]: 11.13 scrub starts
Dec 13 07:18:34 compute-0 ceph-mon[74928]: 11.13 scrub ok
Dec 13 07:18:35 compute-0 ceph-mon[74928]: 10.7 scrub starts
Dec 13 07:18:35 compute-0 ceph-mon[74928]: 10.7 scrub ok
Dec 13 07:18:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:36 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 13 07:18:36 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 13 07:18:36 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 13 07:18:36 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 13 07:18:36 compute-0 ceph-mon[74928]: pgmap v255: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:36 compute-0 ceph-mon[74928]: 8.1 scrub starts
Dec 13 07:18:36 compute-0 ceph-mon[74928]: 8.1 scrub ok
Dec 13 07:18:37 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 13 07:18:37 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 13 07:18:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:37 compute-0 ceph-mon[74928]: 8.9 scrub starts
Dec 13 07:18:37 compute-0 ceph-mon[74928]: 8.9 scrub ok
Dec 13 07:18:37 compute-0 ceph-mon[74928]: 8.11 scrub starts
Dec 13 07:18:37 compute-0 ceph-mon[74928]: 8.11 scrub ok
Dec 13 07:18:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:18:38
Dec 13 07:18:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:18:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:18:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'backups', '.mgr']
Dec 13 07:18:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:18:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:38 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 13 07:18:38 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 13 07:18:38 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 13 07:18:38 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 13 07:18:38 compute-0 ceph-mon[74928]: pgmap v256: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:38 compute-0 ceph-mon[74928]: 11.9 scrub starts
Dec 13 07:18:38 compute-0 ceph-mon[74928]: 11.9 scrub ok
Dec 13 07:18:38 compute-0 ceph-mon[74928]: 11.14 scrub starts
Dec 13 07:18:38 compute-0 ceph-mon[74928]: 11.14 scrub ok
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:18:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:18:39 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 13 07:18:39 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 13 07:18:39 compute-0 ceph-mon[74928]: 8.2 scrub starts
Dec 13 07:18:39 compute-0 ceph-mon[74928]: 8.2 scrub ok
Dec 13 07:18:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:40 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 13 07:18:40 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 13 07:18:40 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 13 07:18:40 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 13 07:18:40 compute-0 ceph-mon[74928]: pgmap v257: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:40 compute-0 ceph-mon[74928]: 11.0 scrub starts
Dec 13 07:18:40 compute-0 ceph-mon[74928]: 11.0 scrub ok
Dec 13 07:18:40 compute-0 ceph-mon[74928]: 10.4 scrub starts
Dec 13 07:18:40 compute-0 ceph-mon[74928]: 10.4 scrub ok
Dec 13 07:18:41 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 13 07:18:41 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 13 07:18:41 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 13 07:18:41 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 13 07:18:41 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 13 07:18:41 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 13 07:18:41 compute-0 ceph-mon[74928]: 9.8 scrub starts
Dec 13 07:18:41 compute-0 ceph-mon[74928]: 9.8 scrub ok
Dec 13 07:18:41 compute-0 ceph-mon[74928]: 8.3 scrub starts
Dec 13 07:18:41 compute-0 ceph-mon[74928]: 8.3 scrub ok
Dec 13 07:18:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:42 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 13 07:18:42 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 13 07:18:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:42 compute-0 ceph-mon[74928]: 10.8 scrub starts
Dec 13 07:18:42 compute-0 ceph-mon[74928]: 10.8 scrub ok
Dec 13 07:18:42 compute-0 ceph-mon[74928]: pgmap v258: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:42 compute-0 ceph-mon[74928]: 8.8 scrub starts
Dec 13 07:18:42 compute-0 ceph-mon[74928]: 8.8 scrub ok
Dec 13 07:18:42 compute-0 sudo[101500]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:43 compute-0 sudo[102368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpxqmjnznnueavuqvygrzzluvzynbhay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610322.986233-137-62239973948048/AnsiballZ_command.py'
Dec 13 07:18:43 compute-0 sudo[102368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:43 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 13 07:18:43 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 13 07:18:43 compute-0 python3.9[102370]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:18:43 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 13 07:18:43 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 13 07:18:43 compute-0 ceph-mon[74928]: 9.18 scrub starts
Dec 13 07:18:43 compute-0 ceph-mon[74928]: 9.18 scrub ok
Dec 13 07:18:43 compute-0 ceph-mon[74928]: 8.a scrub starts
Dec 13 07:18:43 compute-0 ceph-mon[74928]: 8.a scrub ok
Dec 13 07:18:43 compute-0 sudo[102368]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:44 compute-0 sudo[102655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyajumxcylstkadflbzviuewteyhxyqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610323.9694002-145-1810406798059/AnsiballZ_selinux.py'
Dec 13 07:18:44 compute-0 sudo[102655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:44 compute-0 python3.9[102657]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 13 07:18:44 compute-0 sudo[102655]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:44 compute-0 ceph-mon[74928]: pgmap v259: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:45 compute-0 sudo[102807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqaprukgpcroplxiaihcpvlpnnisyucw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610324.9226408-156-234779711607691/AnsiballZ_command.py'
Dec 13 07:18:45 compute-0 sudo[102807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:45 compute-0 python3.9[102809]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 13 07:18:45 compute-0 sudo[102807]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:45 compute-0 sudo[102959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlacmoffqfdqqgovnubztbmlfxfbhfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610325.363485-164-74979317283929/AnsiballZ_file.py'
Dec 13 07:18:45 compute-0 sudo[102959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:45 compute-0 python3.9[102961]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:18:45 compute-0 sudo[102959]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:46 compute-0 sudo[103111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnufxpofponvrywhfcqtbcbkvquzxbli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610325.8370645-172-92725397253575/AnsiballZ_mount.py'
Dec 13 07:18:46 compute-0 sudo[103111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:46 compute-0 python3.9[103113]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 13 07:18:46 compute-0 sudo[103111]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:46 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 13 07:18:46 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 13 07:18:47 compute-0 sudo[103263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygadadptrpgruhikbzeefneoalumbnnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610326.9503338-200-4863402423098/AnsiballZ_file.py'
Dec 13 07:18:47 compute-0 sudo[103263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:47 compute-0 python3.9[103265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:18:47 compute-0 ceph-mon[74928]: pgmap v260: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:47 compute-0 sudo[103263]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:47 compute-0 sudo[103415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awmbtmslhcfgmwwfnrptovdeirwrdrkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610327.4422677-208-81169415571031/AnsiballZ_stat.py'
Dec 13 07:18:47 compute-0 sudo[103415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:47 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 13 07:18:47 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 13 07:18:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:47 compute-0 python3.9[103417]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:18:47 compute-0 sudo[103415]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:47 compute-0 sudo[103493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwnyltymypqxjteiflikvfafbbbpszmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610327.4422677-208-81169415571031/AnsiballZ_file.py'
Dec 13 07:18:47 compute-0 sudo[103493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:48 compute-0 python3.9[103495]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:18:48 compute-0 sudo[103493]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:48 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 13 07:18:48 compute-0 ceph-mon[74928]: 11.4 scrub starts
Dec 13 07:18:48 compute-0 ceph-mon[74928]: 11.4 scrub ok
Dec 13 07:18:48 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:18:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:18:48 compute-0 sudo[103645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrvbvbmodpofrweclunxnpbglrqyalel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610328.5120637-229-198265171279391/AnsiballZ_stat.py'
Dec 13 07:18:48 compute-0 sudo[103645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:48 compute-0 python3.9[103647]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:18:48 compute-0 sudo[103645]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:49 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 13 07:18:49 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 13 07:18:49 compute-0 ceph-mon[74928]: 10.17 scrub starts
Dec 13 07:18:49 compute-0 ceph-mon[74928]: 10.17 scrub ok
Dec 13 07:18:49 compute-0 ceph-mon[74928]: pgmap v261: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:49 compute-0 ceph-mon[74928]: 11.c scrub starts
Dec 13 07:18:49 compute-0 ceph-mon[74928]: 11.c scrub ok
Dec 13 07:18:49 compute-0 sudo[103799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hofwpxugrpmzgkadkxontietfqehcgxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610329.23523-242-71732488394101/AnsiballZ_getent.py'
Dec 13 07:18:49 compute-0 sudo[103799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:49 compute-0 python3.9[103801]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 13 07:18:49 compute-0 sudo[103799]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:50 compute-0 sudo[103952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tieeintkjakbsrlgswfmlrvjcioacjsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610329.8611767-252-49097521762731/AnsiballZ_getent.py'
Dec 13 07:18:50 compute-0 sudo[103952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:50 compute-0 python3.9[103954]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 13 07:18:50 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 13 07:18:50 compute-0 sudo[103952]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:50 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 13 07:18:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:50 compute-0 ceph-mon[74928]: 9.13 scrub starts
Dec 13 07:18:50 compute-0 ceph-mon[74928]: 9.13 scrub ok
Dec 13 07:18:50 compute-0 sudo[104105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjfcszauabjrfhizntbilysdmzuyvmku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610330.3389056-260-193440780373722/AnsiballZ_group.py'
Dec 13 07:18:50 compute-0 sudo[104105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:50 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec 13 07:18:50 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec 13 07:18:50 compute-0 python3.9[104107]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 07:18:50 compute-0 sudo[104105]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:51 compute-0 sudo[104257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvsfxwktghkrorxhfqxurpeileuvmbba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610330.985028-269-196677758891599/AnsiballZ_file.py'
Dec 13 07:18:51 compute-0 sudo[104257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:51 compute-0 python3.9[104259]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 13 07:18:51 compute-0 ceph-mon[74928]: 9.e scrub starts
Dec 13 07:18:51 compute-0 ceph-mon[74928]: 9.e scrub ok
Dec 13 07:18:51 compute-0 ceph-mon[74928]: pgmap v262: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:51 compute-0 sudo[104257]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:51 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 13 07:18:51 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 13 07:18:51 compute-0 sudo[104409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baimotkwugtqfbbjcnxvymlyulougjqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610331.5852888-280-178338649928938/AnsiballZ_dnf.py'
Dec 13 07:18:51 compute-0 sudo[104409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:51 compute-0 python3.9[104411]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:18:52 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 13 07:18:52 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 13 07:18:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:52 compute-0 ceph-mon[74928]: 10.1 scrub starts
Dec 13 07:18:52 compute-0 ceph-mon[74928]: 10.1 scrub ok
Dec 13 07:18:52 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 13 07:18:52 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 13 07:18:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:52 compute-0 sudo[104409]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:53 compute-0 sudo[104562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oklkorvtkxobzlydqwetsdjqidufxiqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610333.0351303-288-121563034472818/AnsiballZ_file.py'
Dec 13 07:18:53 compute-0 sudo[104562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:53 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 13 07:18:53 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 13 07:18:53 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 13 07:18:53 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 13 07:18:53 compute-0 ceph-mon[74928]: 10.16 scrub starts
Dec 13 07:18:53 compute-0 ceph-mon[74928]: 10.16 scrub ok
Dec 13 07:18:53 compute-0 ceph-mon[74928]: 9.19 scrub starts
Dec 13 07:18:53 compute-0 ceph-mon[74928]: 9.19 scrub ok
Dec 13 07:18:53 compute-0 ceph-mon[74928]: pgmap v263: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:53 compute-0 python3.9[104564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:18:53 compute-0 sudo[104562]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:53 compute-0 sudo[104714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfuyztihngztlchcnhptrncqobypqfll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610333.4940753-296-176966876388541/AnsiballZ_stat.py'
Dec 13 07:18:53 compute-0 sudo[104714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:53 compute-0 python3.9[104716]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:18:53 compute-0 sudo[104714]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:54 compute-0 sudo[104792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtzllavzitshgpmxjpsvfaxlapuujxba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610333.4940753-296-176966876388541/AnsiballZ_file.py'
Dec 13 07:18:54 compute-0 sudo[104792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:54 compute-0 python3.9[104794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:18:54 compute-0 sudo[104792]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:54 compute-0 ceph-mon[74928]: 8.10 scrub starts
Dec 13 07:18:54 compute-0 ceph-mon[74928]: 8.10 scrub ok
Dec 13 07:18:54 compute-0 ceph-mon[74928]: 11.a scrub starts
Dec 13 07:18:54 compute-0 ceph-mon[74928]: 9.6 scrub starts
Dec 13 07:18:54 compute-0 ceph-mon[74928]: 11.a scrub ok
Dec 13 07:18:54 compute-0 ceph-mon[74928]: 9.6 scrub ok
Dec 13 07:18:54 compute-0 sudo[104944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwdxsjvwcljfogzfvvfijssgbeehwsry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610334.388984-309-270685689115940/AnsiballZ_stat.py'
Dec 13 07:18:54 compute-0 sudo[104944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:54 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 13 07:18:54 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 13 07:18:54 compute-0 python3.9[104946]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:18:54 compute-0 sudo[104944]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:54 compute-0 sudo[105022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mequzcqrzebruzcgjracsflhlttrhlre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610334.388984-309-270685689115940/AnsiballZ_file.py'
Dec 13 07:18:54 compute-0 sudo[105022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:55 compute-0 python3.9[105024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:18:55 compute-0 sudo[105022]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:55 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 13 07:18:55 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 13 07:18:55 compute-0 ceph-mon[74928]: pgmap v264: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:55 compute-0 sudo[105174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqtpaixistdjqtguxwceijbwtfgajnnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610335.3189979-324-245051996814814/AnsiballZ_dnf.py'
Dec 13 07:18:55 compute-0 sudo[105174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:55 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 13 07:18:55 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 13 07:18:55 compute-0 python3.9[105176]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:18:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:56 compute-0 ceph-mon[74928]: 11.10 scrub starts
Dec 13 07:18:56 compute-0 ceph-mon[74928]: 11.10 scrub ok
Dec 13 07:18:56 compute-0 ceph-mon[74928]: 8.0 scrub starts
Dec 13 07:18:56 compute-0 ceph-mon[74928]: 8.0 scrub ok
Dec 13 07:18:56 compute-0 sudo[105174]: pam_unix(sudo:session): session closed for user root
Dec 13 07:18:57 compute-0 python3.9[105327]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:18:57 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 13 07:18:57 compute-0 ceph-mon[74928]: 11.e scrub starts
Dec 13 07:18:57 compute-0 ceph-mon[74928]: 11.e scrub ok
Dec 13 07:18:57 compute-0 ceph-mon[74928]: pgmap v265: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:57 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 13 07:18:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:18:57 compute-0 python3.9[105479]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 13 07:18:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:58 compute-0 python3.9[105629]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:18:58 compute-0 ceph-mon[74928]: 8.7 scrub starts
Dec 13 07:18:58 compute-0 ceph-mon[74928]: 8.7 scrub ok
Dec 13 07:18:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 13 07:18:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 13 07:18:59 compute-0 sudo[105779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-redtigxioqqyhejqdauevgkrajqcewks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610338.5799062-365-164223737748624/AnsiballZ_systemd.py'
Dec 13 07:18:59 compute-0 sudo[105779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:18:59 compute-0 python3.9[105781]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:18:59 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 13 07:18:59 compute-0 ceph-mon[74928]: pgmap v266: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:18:59 compute-0 ceph-mon[74928]: 11.5 scrub starts
Dec 13 07:18:59 compute-0 ceph-mon[74928]: 11.5 scrub ok
Dec 13 07:18:59 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 13 07:18:59 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 13 07:18:59 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 07:18:59 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 07:18:59 compute-0 sudo[105779]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:00 compute-0 python3.9[105942]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 13 07:19:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:01 compute-0 ceph-mon[74928]: pgmap v267: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:01 compute-0 sudo[106092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwouvbvnqjhphsorchhynfxttbrpzzkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610341.2940788-422-63600890357716/AnsiballZ_systemd.py'
Dec 13 07:19:01 compute-0 sudo[106092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:01 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 13 07:19:01 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 13 07:19:01 compute-0 python3.9[106094]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:19:01 compute-0 sudo[106092]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:02 compute-0 sudo[106246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkgsncaqlijdsrwmqaysediexdhmirri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610341.8882737-422-268748073413852/AnsiballZ_systemd.py'
Dec 13 07:19:02 compute-0 sudo[106246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:02 compute-0 python3.9[106248]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:19:02 compute-0 sudo[106246]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:02 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 13 07:19:02 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 13 07:19:02 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 13 07:19:02 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 13 07:19:02 compute-0 sshd-session[99588]: Connection closed by 192.168.122.30 port 45480
Dec 13 07:19:02 compute-0 sshd-session[99585]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:19:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:02 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Dec 13 07:19:02 compute-0 systemd[1]: session-37.scope: Consumed 48.134s CPU time.
Dec 13 07:19:02 compute-0 systemd-logind[745]: Session 37 logged out. Waiting for processes to exit.
Dec 13 07:19:02 compute-0 systemd-logind[745]: Removed session 37.
Dec 13 07:19:03 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 13 07:19:03 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 13 07:19:03 compute-0 ceph-mon[74928]: 10.e scrub starts
Dec 13 07:19:03 compute-0 ceph-mon[74928]: 10.e scrub ok
Dec 13 07:19:03 compute-0 ceph-mon[74928]: pgmap v268: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:03 compute-0 ceph-mon[74928]: 8.5 scrub starts
Dec 13 07:19:03 compute-0 ceph-mon[74928]: 8.5 scrub ok
Dec 13 07:19:03 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 13 07:19:03 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 13 07:19:04 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 13 07:19:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:04 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 13 07:19:04 compute-0 ceph-mon[74928]: 10.d scrub starts
Dec 13 07:19:04 compute-0 ceph-mon[74928]: 10.d scrub ok
Dec 13 07:19:04 compute-0 ceph-mon[74928]: 9.7 scrub starts
Dec 13 07:19:04 compute-0 ceph-mon[74928]: 9.7 scrub ok
Dec 13 07:19:04 compute-0 ceph-mon[74928]: 11.7 scrub starts
Dec 13 07:19:04 compute-0 ceph-mon[74928]: 11.7 scrub ok
Dec 13 07:19:05 compute-0 ceph-mon[74928]: 9.c scrub starts
Dec 13 07:19:05 compute-0 ceph-mon[74928]: pgmap v269: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:05 compute-0 ceph-mon[74928]: 9.c scrub ok
Dec 13 07:19:05 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 13 07:19:05 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 13 07:19:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:06 compute-0 ceph-mon[74928]: 8.19 scrub starts
Dec 13 07:19:06 compute-0 ceph-mon[74928]: 8.19 scrub ok
Dec 13 07:19:07 compute-0 ceph-mon[74928]: pgmap v270: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:07 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 13 07:19:07 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 13 07:19:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:08 compute-0 sshd-session[106275]: Accepted publickey for zuul from 192.168.122.30 port 42788 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:19:08 compute-0 systemd-logind[745]: New session 38 of user zuul.
Dec 13 07:19:08 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 13 07:19:08 compute-0 sshd-session[106275]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:19:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:19:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:19:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:19:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:19:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:19:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:19:09 compute-0 python3.9[106428]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:19:09 compute-0 ceph-mon[74928]: 8.f scrub starts
Dec 13 07:19:09 compute-0 ceph-mon[74928]: 8.f scrub ok
Dec 13 07:19:09 compute-0 ceph-mon[74928]: pgmap v271: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:10 compute-0 sudo[106582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhixybtsqwxwzvyyvobmewgpzhseqhov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610349.6895049-36-77789781810766/AnsiballZ_getent.py'
Dec 13 07:19:10 compute-0 sudo[106582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:10 compute-0 python3.9[106584]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 13 07:19:10 compute-0 sudo[106582]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:10 compute-0 sudo[106735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxfhjpqlqlksfxcwwrosdlighqfqxutc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610350.3828692-48-271105375132235/AnsiballZ_setup.py'
Dec 13 07:19:10 compute-0 sudo[106735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:10 compute-0 python3.9[106737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:19:11 compute-0 sudo[106735]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:11 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 13 07:19:11 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 13 07:19:11 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 13 07:19:11 compute-0 sudo[106819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmfautqsfygxozqyxycufgtqiwjlwqku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610350.3828692-48-271105375132235/AnsiballZ_dnf.py'
Dec 13 07:19:11 compute-0 sudo[106819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:11 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 13 07:19:11 compute-0 ceph-mon[74928]: pgmap v272: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:11 compute-0 python3.9[106821]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 07:19:12 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 13 07:19:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:12 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 13 07:19:12 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 13 07:19:12 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 13 07:19:12 compute-0 ceph-mon[74928]: 9.f scrub starts
Dec 13 07:19:12 compute-0 ceph-mon[74928]: 9.f scrub ok
Dec 13 07:19:12 compute-0 ceph-mon[74928]: 11.1d scrub starts
Dec 13 07:19:12 compute-0 ceph-mon[74928]: 11.1d scrub ok
Dec 13 07:19:12 compute-0 sudo[106819]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:12 compute-0 sudo[106972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zktpstfmskpgjbakgkqzaruqiewtunij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610352.7228577-62-48167967308070/AnsiballZ_dnf.py'
Dec 13 07:19:12 compute-0 sudo[106972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:13 compute-0 python3.9[106974]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:19:13 compute-0 ceph-mon[74928]: 9.17 scrub starts
Dec 13 07:19:13 compute-0 ceph-mon[74928]: pgmap v273: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:13 compute-0 ceph-mon[74928]: 9.17 scrub ok
Dec 13 07:19:13 compute-0 ceph-mon[74928]: 8.1e scrub starts
Dec 13 07:19:13 compute-0 ceph-mon[74928]: 8.1e scrub ok
Dec 13 07:19:14 compute-0 sudo[106972]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:14 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 13 07:19:14 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 13 07:19:14 compute-0 sudo[107125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkzcpoqkkzcehnjjbhyafqwzuexghqvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610354.174257-70-54112543881234/AnsiballZ_systemd.py'
Dec 13 07:19:14 compute-0 sudo[107125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:14 compute-0 python3.9[107127]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:19:14 compute-0 sudo[107125]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:15 compute-0 ceph-mon[74928]: pgmap v274: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:15 compute-0 python3.9[107280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:19:15 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 13 07:19:15 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 13 07:19:15 compute-0 sudo[107430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrrirtxfyhuyvhmjytjynxtxlmvewabv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610355.638402-88-111515545645637/AnsiballZ_sefcontext.py'
Dec 13 07:19:15 compute-0 sudo[107430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:16 compute-0 python3.9[107432]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 13 07:19:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:16 compute-0 sudo[107430]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:16 compute-0 ceph-mon[74928]: 8.6 scrub starts
Dec 13 07:19:16 compute-0 ceph-mon[74928]: 8.6 scrub ok
Dec 13 07:19:16 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 13 07:19:16 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 13 07:19:16 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 13 07:19:16 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 13 07:19:16 compute-0 python3.9[107582]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:19:17 compute-0 ceph-mon[74928]: 10.15 scrub starts
Dec 13 07:19:17 compute-0 ceph-mon[74928]: 10.15 scrub ok
Dec 13 07:19:17 compute-0 ceph-mon[74928]: pgmap v275: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:17 compute-0 ceph-mon[74928]: 8.13 scrub starts
Dec 13 07:19:17 compute-0 ceph-mon[74928]: 8.13 scrub ok
Dec 13 07:19:17 compute-0 sudo[107738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpufduylffuyoutchckhcckebaslmgls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610357.187039-106-261127260407140/AnsiballZ_dnf.py'
Dec 13 07:19:17 compute-0 sudo[107738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:17 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 13 07:19:17 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 13 07:19:17 compute-0 python3.9[107740]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:19:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:17 compute-0 sudo[107742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:19:17 compute-0 sudo[107742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:17 compute-0 sudo[107742]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:17 compute-0 sudo[107767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:19:17 compute-0 sudo[107767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:18 compute-0 sudo[107767]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:19:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:19:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:19:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:19:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:19:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:19:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:19:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:19:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:19:18 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:19:18 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:19:18 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:19:18 compute-0 sudo[107822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:19:18 compute-0 sudo[107822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:18 compute-0 sudo[107822]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:18 compute-0 sudo[107847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:19:18 compute-0 sudo[107847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:18 compute-0 ceph-mon[74928]: 8.b scrub starts
Dec 13 07:19:18 compute-0 ceph-mon[74928]: 8.b scrub ok
Dec 13 07:19:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:19:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:19:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:19:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:19:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:19:18 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:19:18 compute-0 podman[107882]: 2025-12-13 07:19:18.511645749 +0000 UTC m=+0.026316586 container create cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:19:18 compute-0 systemd[76210]: Created slice User Background Tasks Slice.
Dec 13 07:19:18 compute-0 systemd[1]: Started libpod-conmon-cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842.scope.
Dec 13 07:19:18 compute-0 systemd[76210]: Starting Cleanup of User's Temporary Files and Directories...
Dec 13 07:19:18 compute-0 systemd[76210]: Finished Cleanup of User's Temporary Files and Directories.
Dec 13 07:19:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:19:18 compute-0 podman[107882]: 2025-12-13 07:19:18.550824148 +0000 UTC m=+0.065494995 container init cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 07:19:18 compute-0 podman[107882]: 2025-12-13 07:19:18.556671691 +0000 UTC m=+0.071342508 container start cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:19:18 compute-0 podman[107882]: 2025-12-13 07:19:18.558461803 +0000 UTC m=+0.073132629 container attach cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:19:18 compute-0 admiring_moser[107897]: 167 167
Dec 13 07:19:18 compute-0 systemd[1]: libpod-cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842.scope: Deactivated successfully.
Dec 13 07:19:18 compute-0 podman[107882]: 2025-12-13 07:19:18.560894194 +0000 UTC m=+0.075565022 container died cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 07:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bf3fe1c90b19e3f1b7a5a6ac32a3fd91e83ec5ecee55183848eb0b9d99aa31e-merged.mount: Deactivated successfully.
Dec 13 07:19:18 compute-0 podman[107882]: 2025-12-13 07:19:18.578091253 +0000 UTC m=+0.092762080 container remove cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 07:19:18 compute-0 podman[107882]: 2025-12-13 07:19:18.501425319 +0000 UTC m=+0.016096166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:19:18 compute-0 systemd[1]: libpod-conmon-cbd959b4c67faa3b04e081a96dd1264ebae14b105f50adf9193df32640e0a842.scope: Deactivated successfully.
Dec 13 07:19:18 compute-0 sudo[107738]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:18 compute-0 podman[107919]: 2025-12-13 07:19:18.695336814 +0000 UTC m=+0.030295530 container create 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:19:18 compute-0 systemd[1]: Started libpod-conmon-829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91.scope.
Dec 13 07:19:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:18 compute-0 podman[107919]: 2025-12-13 07:19:18.761897846 +0000 UTC m=+0.096856562 container init 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:19:18 compute-0 podman[107919]: 2025-12-13 07:19:18.766645286 +0000 UTC m=+0.101603983 container start 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:19:18 compute-0 podman[107919]: 2025-12-13 07:19:18.768137577 +0000 UTC m=+0.103096293 container attach 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 07:19:18 compute-0 podman[107919]: 2025-12-13 07:19:18.683978832 +0000 UTC m=+0.018937558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:19:19 compute-0 sudo[108098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylvmdlzcmmozatjrailufhivriocmyno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610358.7917755-114-142776696351770/AnsiballZ_command.py'
Dec 13 07:19:19 compute-0 sudo[108098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:19 compute-0 eager_carver[107956]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:19:19 compute-0 eager_carver[107956]: --> All data devices are unavailable
Dec 13 07:19:19 compute-0 systemd[1]: libpod-829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91.scope: Deactivated successfully.
Dec 13 07:19:19 compute-0 conmon[107956]: conmon 829d96c277523a10477a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91.scope/container/memory.events
Dec 13 07:19:19 compute-0 podman[107919]: 2025-12-13 07:19:19.141747646 +0000 UTC m=+0.476706352 container died 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:19:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5bad39328723fb4bc64534ca9a9877d734ce9303056f7175f3960876af5a4d-merged.mount: Deactivated successfully.
Dec 13 07:19:19 compute-0 podman[107919]: 2025-12-13 07:19:19.165153398 +0000 UTC m=+0.500112104 container remove 829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:19:19 compute-0 systemd[1]: libpod-conmon-829d96c277523a10477ad48318abda630036a29c6a8c22d6ab30f2a9592c0d91.scope: Deactivated successfully.
Dec 13 07:19:19 compute-0 sudo[107847]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:19 compute-0 sudo[108114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:19:19 compute-0 sudo[108114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:19 compute-0 sudo[108114]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:19 compute-0 python3.9[108102]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:19:19 compute-0 sudo[108139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:19:19 compute-0 sudo[108139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:19 compute-0 ceph-mon[74928]: 10.9 scrub starts
Dec 13 07:19:19 compute-0 ceph-mon[74928]: 10.9 scrub ok
Dec 13 07:19:19 compute-0 ceph-mon[74928]: pgmap v276: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:19 compute-0 podman[108181]: 2025-12-13 07:19:19.505200525 +0000 UTC m=+0.028010496 container create 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:19:19 compute-0 systemd[1]: Started libpod-conmon-14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1.scope.
Dec 13 07:19:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:19:19 compute-0 podman[108181]: 2025-12-13 07:19:19.553740716 +0000 UTC m=+0.076550687 container init 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:19:19 compute-0 podman[108181]: 2025-12-13 07:19:19.558088174 +0000 UTC m=+0.080898145 container start 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 07:19:19 compute-0 boring_ganguly[108194]: 167 167
Dec 13 07:19:19 compute-0 systemd[1]: libpod-14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1.scope: Deactivated successfully.
Dec 13 07:19:19 compute-0 podman[108181]: 2025-12-13 07:19:19.561525427 +0000 UTC m=+0.084335399 container attach 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:19:19 compute-0 podman[108181]: 2025-12-13 07:19:19.56175469 +0000 UTC m=+0.084564660 container died 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:19:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2023caa6f189106792b0e203dfd46e7ee947f491e000559a7150e25e8e3b64c8-merged.mount: Deactivated successfully.
Dec 13 07:19:19 compute-0 podman[108181]: 2025-12-13 07:19:19.577360442 +0000 UTC m=+0.100170413 container remove 14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ganguly, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec 13 07:19:19 compute-0 podman[108181]: 2025-12-13 07:19:19.493187119 +0000 UTC m=+0.015997110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:19:19 compute-0 systemd[1]: libpod-conmon-14386efd1faaf948bad73d74ba9f5d45be89a5d22539a15e89cd8c118c8a16f1.scope: Deactivated successfully.
Dec 13 07:19:19 compute-0 podman[108324]: 2025-12-13 07:19:19.695251047 +0000 UTC m=+0.027677119 container create cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 07:19:19 compute-0 systemd[1]: Started libpod-conmon-cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6.scope.
Dec 13 07:19:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:19 compute-0 podman[108324]: 2025-12-13 07:19:19.750785441 +0000 UTC m=+0.083211532 container init cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:19:19 compute-0 podman[108324]: 2025-12-13 07:19:19.755911536 +0000 UTC m=+0.088337607 container start cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 07:19:19 compute-0 podman[108324]: 2025-12-13 07:19:19.757240279 +0000 UTC m=+0.089666360 container attach cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:19:19 compute-0 podman[108324]: 2025-12-13 07:19:19.683723136 +0000 UTC m=+0.016149227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:19:19 compute-0 sudo[108098]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:19 compute-0 clever_albattani[108358]: {
Dec 13 07:19:19 compute-0 clever_albattani[108358]:     "0": [
Dec 13 07:19:19 compute-0 clever_albattani[108358]:         {
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "devices": [
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "/dev/loop3"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             ],
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_name": "ceph_lv0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_size": "21470642176",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "name": "ceph_lv0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "tags": {
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cluster_name": "ceph",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.crush_device_class": "",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.encrypted": "0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.objectstore": "bluestore",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osd_id": "0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.type": "block",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.vdo": "0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.with_tpm": "0"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             },
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "type": "block",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "vg_name": "ceph_vg0"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:         }
Dec 13 07:19:19 compute-0 clever_albattani[108358]:     ],
Dec 13 07:19:19 compute-0 clever_albattani[108358]:     "1": [
Dec 13 07:19:19 compute-0 clever_albattani[108358]:         {
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "devices": [
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "/dev/loop4"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             ],
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_name": "ceph_lv1",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_size": "21470642176",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "name": "ceph_lv1",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "tags": {
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cluster_name": "ceph",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.crush_device_class": "",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.encrypted": "0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.objectstore": "bluestore",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osd_id": "1",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.type": "block",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.vdo": "0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.with_tpm": "0"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             },
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "type": "block",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "vg_name": "ceph_vg1"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:         }
Dec 13 07:19:19 compute-0 clever_albattani[108358]:     ],
Dec 13 07:19:19 compute-0 clever_albattani[108358]:     "2": [
Dec 13 07:19:19 compute-0 clever_albattani[108358]:         {
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "devices": [
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "/dev/loop5"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             ],
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_name": "ceph_lv2",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_size": "21470642176",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "name": "ceph_lv2",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "tags": {
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.cluster_name": "ceph",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.crush_device_class": "",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.encrypted": "0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.objectstore": "bluestore",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osd_id": "2",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.type": "block",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.vdo": "0",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:                 "ceph.with_tpm": "0"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             },
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "type": "block",
Dec 13 07:19:19 compute-0 clever_albattani[108358]:             "vg_name": "ceph_vg2"
Dec 13 07:19:19 compute-0 clever_albattani[108358]:         }
Dec 13 07:19:19 compute-0 clever_albattani[108358]:     ]
Dec 13 07:19:19 compute-0 clever_albattani[108358]: }
Dec 13 07:19:19 compute-0 podman[108324]: 2025-12-13 07:19:19.987960036 +0000 UTC m=+0.320386117 container died cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:19:19 compute-0 systemd[1]: libpod-cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6.scope: Deactivated successfully.
Dec 13 07:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9301518057fdbd103026b9bdcc04b54cf8cdd6557fda654ddf2ed891af1880e0-merged.mount: Deactivated successfully.
Dec 13 07:19:20 compute-0 podman[108324]: 2025-12-13 07:19:20.01132933 +0000 UTC m=+0.343755401 container remove cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_albattani, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 07:19:20 compute-0 systemd[1]: libpod-conmon-cbe6b89aede74110a9aa1715475ed2d6b4f07bfb210565644f666dd08e57eba6.scope: Deactivated successfully.
Dec 13 07:19:20 compute-0 sudo[108139]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:20 compute-0 sudo[108452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:19:20 compute-0 sudo[108452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:20 compute-0 sudo[108452]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:20 compute-0 sudo[108477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:19:20 compute-0 sudo[108477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:20 compute-0 sudo[108575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fopivcnwymdbiufvdnyhusvakenuevpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610359.9274986-122-120568767960689/AnsiballZ_file.py'
Dec 13 07:19:20 compute-0 sudo[108575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:20 compute-0 podman[108587]: 2025-12-13 07:19:20.352981451 +0000 UTC m=+0.028892939 container create 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:19:20 compute-0 systemd[1]: Started libpod-conmon-45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70.scope.
Dec 13 07:19:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:19:20 compute-0 podman[108587]: 2025-12-13 07:19:20.406940897 +0000 UTC m=+0.082852395 container init 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:19:20 compute-0 podman[108587]: 2025-12-13 07:19:20.411421096 +0000 UTC m=+0.087332574 container start 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:19:20 compute-0 podman[108587]: 2025-12-13 07:19:20.413465687 +0000 UTC m=+0.089377164 container attach 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:19:20 compute-0 suspicious_easley[108600]: 167 167
Dec 13 07:19:20 compute-0 python3.9[108577]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 07:19:20 compute-0 systemd[1]: libpod-45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70.scope: Deactivated successfully.
Dec 13 07:19:20 compute-0 podman[108587]: 2025-12-13 07:19:20.416320954 +0000 UTC m=+0.092232433 container died 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7926541ee0a6a86053c0887cc061dd8b57d189504e34ff199321459096afecbf-merged.mount: Deactivated successfully.
Dec 13 07:19:20 compute-0 sudo[108575]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:20 compute-0 podman[108587]: 2025-12-13 07:19:20.436350919 +0000 UTC m=+0.112262397 container remove 45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_easley, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:19:20 compute-0 podman[108587]: 2025-12-13 07:19:20.341501088 +0000 UTC m=+0.017412586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:19:20 compute-0 systemd[1]: libpod-conmon-45d1351cf9bd7a6aebf6ebde9c3026028f545b83b9fd19a92f0e05921b000f70.scope: Deactivated successfully.
Dec 13 07:19:20 compute-0 podman[108646]: 2025-12-13 07:19:20.548846764 +0000 UTC m=+0.028061272 container create 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 07:19:20 compute-0 systemd[1]: Started libpod-conmon-7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180.scope.
Dec 13 07:19:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:19:20 compute-0 podman[108646]: 2025-12-13 07:19:20.60415452 +0000 UTC m=+0.083369048 container init 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 07:19:20 compute-0 podman[108646]: 2025-12-13 07:19:20.610533335 +0000 UTC m=+0.089747843 container start 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 07:19:20 compute-0 podman[108646]: 2025-12-13 07:19:20.611674864 +0000 UTC m=+0.090889373 container attach 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 07:19:20 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 13 07:19:20 compute-0 podman[108646]: 2025-12-13 07:19:20.536787812 +0000 UTC m=+0.016002340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:19:20 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 13 07:19:20 compute-0 python3.9[108799]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:19:21 compute-0 lvm[108941]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:19:21 compute-0 lvm[108936]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:19:21 compute-0 lvm[108941]: VG ceph_vg1 finished
Dec 13 07:19:21 compute-0 lvm[108945]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:19:21 compute-0 lvm[108945]: VG ceph_vg2 finished
Dec 13 07:19:21 compute-0 lvm[108936]: VG ceph_vg0 finished
Dec 13 07:19:21 compute-0 trusting_banach[108697]: {}
Dec 13 07:19:21 compute-0 lvm[108950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:19:21 compute-0 lvm[108950]: VG ceph_vg0 finished
Dec 13 07:19:21 compute-0 podman[108646]: 2025-12-13 07:19:21.234106375 +0000 UTC m=+0.713320883 container died 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:19:21 compute-0 systemd[1]: libpod-7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180.scope: Deactivated successfully.
Dec 13 07:19:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9038cf6c21dadbf088a664290d4b8fd0a6e1cdddbdeae55ed126fa74c7032e0-merged.mount: Deactivated successfully.
Dec 13 07:19:21 compute-0 podman[108646]: 2025-12-13 07:19:21.266055999 +0000 UTC m=+0.745270508 container remove 7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_banach, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:19:21 compute-0 systemd[1]: libpod-conmon-7b5a14aa1de76fb1bfe76a92735f891d19e1474009d6b4aed1261429ca0c6180.scope: Deactivated successfully.
Dec 13 07:19:21 compute-0 sudo[108477]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:19:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:19:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:19:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:19:21 compute-0 sudo[109006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:19:21 compute-0 sudo[109006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:19:21 compute-0 sudo[109006]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:21 compute-0 sudo[109055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rexxjmpnlfqyjitokgjlbwpywudzhzjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610361.1393044-138-166559742281595/AnsiballZ_dnf.py'
Dec 13 07:19:21 compute-0 sudo[109055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:21 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 13 07:19:21 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 13 07:19:21 compute-0 ceph-mon[74928]: pgmap v277: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:21 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:19:21 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:19:21 compute-0 python3.9[109059]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:19:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:22 compute-0 ceph-mon[74928]: 9.1e scrub starts
Dec 13 07:19:22 compute-0 ceph-mon[74928]: 9.1e scrub ok
Dec 13 07:19:22 compute-0 ceph-mon[74928]: 10.10 scrub starts
Dec 13 07:19:22 compute-0 ceph-mon[74928]: 10.10 scrub ok
Dec 13 07:19:22 compute-0 sudo[109055]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:22 compute-0 sudo[109210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faxieubfedjqhbwoqbzhctvoezgberod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610362.690357-147-134858839589235/AnsiballZ_dnf.py'
Dec 13 07:19:22 compute-0 sudo[109210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:23 compute-0 python3.9[109212]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:19:23 compute-0 ceph-mon[74928]: pgmap v278: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:24 compute-0 sudo[109210]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:24 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 13 07:19:24 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 13 07:19:24 compute-0 sudo[109363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvscgtystbuxjjftayftjylndgdlcrhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610364.268645-159-231826773153547/AnsiballZ_stat.py'
Dec 13 07:19:24 compute-0 sudo[109363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:24 compute-0 python3.9[109365]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:19:24 compute-0 sudo[109363]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:25 compute-0 sudo[109517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibaqwtzkcutdabikbtrfxobmknhxrlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610364.741357-167-235657798454975/AnsiballZ_slurp.py'
Dec 13 07:19:25 compute-0 sudo[109517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:25 compute-0 python3.9[109519]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 13 07:19:25 compute-0 sudo[109517]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:25 compute-0 ceph-mon[74928]: pgmap v279: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:25 compute-0 ceph-mon[74928]: 10.13 scrub starts
Dec 13 07:19:25 compute-0 ceph-mon[74928]: 10.13 scrub ok
Dec 13 07:19:25 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 13 07:19:25 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 13 07:19:25 compute-0 sshd-session[106278]: Connection closed by 192.168.122.30 port 42788
Dec 13 07:19:25 compute-0 sshd-session[106275]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:19:25 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Dec 13 07:19:25 compute-0 systemd[1]: session-38.scope: Consumed 13.035s CPU time.
Dec 13 07:19:25 compute-0 systemd-logind[745]: Session 38 logged out. Waiting for processes to exit.
Dec 13 07:19:25 compute-0 systemd-logind[745]: Removed session 38.
Dec 13 07:19:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:27 compute-0 ceph-mon[74928]: 9.1c scrub starts
Dec 13 07:19:27 compute-0 ceph-mon[74928]: 9.1c scrub ok
Dec 13 07:19:27 compute-0 ceph-mon[74928]: pgmap v280: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:27 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 13 07:19:27 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 13 07:19:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:28 compute-0 ceph-mon[74928]: 10.11 scrub starts
Dec 13 07:19:28 compute-0 ceph-mon[74928]: 10.11 scrub ok
Dec 13 07:19:29 compute-0 ceph-mon[74928]: pgmap v281: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:30 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 13 07:19:30 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 13 07:19:31 compute-0 sshd-session[109544]: Accepted publickey for zuul from 192.168.122.30 port 43754 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:19:31 compute-0 systemd-logind[745]: New session 39 of user zuul.
Dec 13 07:19:31 compute-0 systemd[1]: Started Session 39 of User zuul.
Dec 13 07:19:31 compute-0 sshd-session[109544]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:19:31 compute-0 ceph-mon[74928]: pgmap v282: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:31 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 13 07:19:31 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 13 07:19:31 compute-0 python3.9[109697]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:19:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:32 compute-0 ceph-mon[74928]: 9.1b scrub starts
Dec 13 07:19:32 compute-0 ceph-mon[74928]: 9.1b scrub ok
Dec 13 07:19:32 compute-0 python3.9[109851]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:19:32 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 13 07:19:32 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 13 07:19:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:33 compute-0 python3.9[110044]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:19:33 compute-0 ceph-mon[74928]: 9.1d scrub starts
Dec 13 07:19:33 compute-0 ceph-mon[74928]: 9.1d scrub ok
Dec 13 07:19:33 compute-0 ceph-mon[74928]: pgmap v283: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:33 compute-0 sshd-session[109547]: Connection closed by 192.168.122.30 port 43754
Dec 13 07:19:33 compute-0 sshd-session[109544]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:19:33 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Dec 13 07:19:33 compute-0 systemd[1]: session-39.scope: Consumed 1.638s CPU time.
Dec 13 07:19:33 compute-0 systemd-logind[745]: Session 39 logged out. Waiting for processes to exit.
Dec 13 07:19:33 compute-0 systemd-logind[745]: Removed session 39.
Dec 13 07:19:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:34 compute-0 ceph-mon[74928]: 9.3 scrub starts
Dec 13 07:19:34 compute-0 ceph-mon[74928]: 9.3 scrub ok
Dec 13 07:19:35 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 13 07:19:35 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 13 07:19:35 compute-0 ceph-mon[74928]: pgmap v284: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:35 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 13 07:19:35 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 13 07:19:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:36 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 13 07:19:36 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 13 07:19:36 compute-0 ceph-mon[74928]: 10.1a scrub starts
Dec 13 07:19:36 compute-0 ceph-mon[74928]: 10.1a scrub ok
Dec 13 07:19:37 compute-0 ceph-mon[74928]: 9.d scrub starts
Dec 13 07:19:37 compute-0 ceph-mon[74928]: 9.d scrub ok
Dec 13 07:19:37 compute-0 ceph-mon[74928]: pgmap v285: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:37 compute-0 ceph-mon[74928]: 10.19 scrub starts
Dec 13 07:19:37 compute-0 ceph-mon[74928]: 10.19 scrub ok
Dec 13 07:19:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:19:38
Dec 13 07:19:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:19:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:19:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Dec 13 07:19:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:19:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:38 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 13 07:19:38 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:19:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:19:39 compute-0 sshd-session[110070]: Accepted publickey for zuul from 192.168.122.30 port 44396 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:19:39 compute-0 systemd-logind[745]: New session 40 of user zuul.
Dec 13 07:19:39 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 13 07:19:39 compute-0 sshd-session[110070]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:19:39 compute-0 ceph-mon[74928]: pgmap v286: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:39 compute-0 ceph-mon[74928]: 10.6 scrub starts
Dec 13 07:19:39 compute-0 ceph-mon[74928]: 10.6 scrub ok
Dec 13 07:19:39 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 13 07:19:39 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 13 07:19:40 compute-0 python3.9[110223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:19:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:40 compute-0 python3.9[110377]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:19:41 compute-0 sudo[110531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfiuiewfvvtnjneksnttiocppigrhjto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610381.227557-40-24800202036321/AnsiballZ_setup.py'
Dec 13 07:19:41 compute-0 sudo[110531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:41 compute-0 ceph-mon[74928]: 9.1 scrub starts
Dec 13 07:19:41 compute-0 ceph-mon[74928]: 9.1 scrub ok
Dec 13 07:19:41 compute-0 ceph-mon[74928]: pgmap v287: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:41 compute-0 python3.9[110533]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:19:41 compute-0 sudo[110531]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:42 compute-0 sudo[110615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwhxbxglvmetjxaybozzdltjpvsaslhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610381.227557-40-24800202036321/AnsiballZ_dnf.py'
Dec 13 07:19:42 compute-0 sudo[110615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:42 compute-0 python3.9[110617]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:19:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:43 compute-0 sudo[110615]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:43 compute-0 ceph-mon[74928]: pgmap v288: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:43 compute-0 sudo[110768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcgamyjhbwgrishktubafzfbqoswgqzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610383.4591017-52-224814709788007/AnsiballZ_setup.py'
Dec 13 07:19:43 compute-0 sudo[110768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:43 compute-0 python3.9[110770]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:19:44 compute-0 sudo[110768]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:44 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 13 07:19:44 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 13 07:19:44 compute-0 sudo[110963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mflasxgqynfigrfjuzjyediznrpgzftw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610384.2668347-63-120688820106107/AnsiballZ_file.py'
Dec 13 07:19:44 compute-0 sudo[110963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:44 compute-0 python3.9[110965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:19:44 compute-0 sudo[110963]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:45 compute-0 sudo[111115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whpnsoafcsgzymxlxhuresezpcpneivc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610384.902688-71-113336706736571/AnsiballZ_command.py'
Dec 13 07:19:45 compute-0 sudo[111115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:45 compute-0 python3.9[111117]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:19:45 compute-0 sudo[111115]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:45 compute-0 ceph-mon[74928]: pgmap v289: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:45 compute-0 ceph-mon[74928]: 10.2 scrub starts
Dec 13 07:19:45 compute-0 ceph-mon[74928]: 10.2 scrub ok
Dec 13 07:19:45 compute-0 sudo[111277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwvjnvynotfqszfddbeynwqnobtxbosq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610385.544478-79-231104749599685/AnsiballZ_stat.py'
Dec 13 07:19:45 compute-0 sudo[111277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:46 compute-0 python3.9[111279]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:19:46 compute-0 sudo[111277]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:46 compute-0 sudo[111355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhaczvgwwuymwmwlzcpyvhejjfkjyjyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610385.544478-79-231104749599685/AnsiballZ_file.py'
Dec 13 07:19:46 compute-0 sudo[111355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:46 compute-0 python3.9[111357]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:19:46 compute-0 sudo[111355]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:46 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 13 07:19:46 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 13 07:19:46 compute-0 sudo[111507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqzeqdsiunoqkmkkftnbmcobsghcuwpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610386.4671197-91-13123545124334/AnsiballZ_stat.py'
Dec 13 07:19:46 compute-0 sudo[111507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:46 compute-0 python3.9[111509]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:19:46 compute-0 sudo[111507]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:46 compute-0 sudo[111585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hddwxfdaichvtrqfrsxjbagwbskshxds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610386.4671197-91-13123545124334/AnsiballZ_file.py'
Dec 13 07:19:46 compute-0 sudo[111585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:47 compute-0 python3.9[111587]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:19:47 compute-0 sudo[111585]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:47 compute-0 ceph-mon[74928]: pgmap v290: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:47 compute-0 ceph-mon[74928]: 10.f scrub starts
Dec 13 07:19:47 compute-0 ceph-mon[74928]: 10.f scrub ok
Dec 13 07:19:47 compute-0 sudo[111737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyieyhexzalekufmdredlezozxuqleml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610387.313603-104-70530830684931/AnsiballZ_ini_file.py'
Dec 13 07:19:47 compute-0 sudo[111737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:47 compute-0 python3.9[111739]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:19:47 compute-0 sudo[111737]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:48 compute-0 sudo[111889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxzaqvzgzyqnnbnvqsyqftgkwqcfmukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610387.8812854-104-200204530503408/AnsiballZ_ini_file.py'
Dec 13 07:19:48 compute-0 sudo[111889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:48 compute-0 python3.9[111891]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:19:48 compute-0 sudo[111889]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:19:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:19:48 compute-0 sudo[112041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thhqvxnoijivttghawtxswssegtfealb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610388.449256-104-199873540683099/AnsiballZ_ini_file.py'
Dec 13 07:19:48 compute-0 sudo[112041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:48 compute-0 python3.9[112043]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:19:48 compute-0 sudo[112041]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:49 compute-0 sudo[112193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwevqqlmamwgppjgigpuqefykfgoekil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610388.8784912-104-59181739612171/AnsiballZ_ini_file.py'
Dec 13 07:19:49 compute-0 sudo[112193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:49 compute-0 python3.9[112195]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:19:49 compute-0 sudo[112193]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:49 compute-0 ceph-mon[74928]: pgmap v291: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:49 compute-0 sudo[112345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyqrchoewfkxllxgnlretfrhnjfmeaxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610389.3898988-135-121592211102189/AnsiballZ_dnf.py'
Dec 13 07:19:49 compute-0 sudo[112345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:49 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 13 07:19:49 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 13 07:19:49 compute-0 python3.9[112347]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:19:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:50 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 13 07:19:50 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 13 07:19:50 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 13 07:19:50 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 13 07:19:50 compute-0 sudo[112345]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:51 compute-0 sudo[112498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blzbfvrquczzqdzoedkjnndmepvdjhyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610391.06933-146-203458295953150/AnsiballZ_setup.py'
Dec 13 07:19:51 compute-0 sudo[112498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:51 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 13 07:19:51 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 13 07:19:51 compute-0 python3.9[112500]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:19:51 compute-0 ceph-mon[74928]: 9.9 scrub starts
Dec 13 07:19:51 compute-0 ceph-mon[74928]: 9.9 scrub ok
Dec 13 07:19:51 compute-0 ceph-mon[74928]: pgmap v292: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:51 compute-0 ceph-mon[74928]: 10.b scrub starts
Dec 13 07:19:51 compute-0 ceph-mon[74928]: 10.b scrub ok
Dec 13 07:19:51 compute-0 sudo[112498]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:51 compute-0 sudo[112652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epwesqswsftviixznabhpdhtcwvwhawz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610391.6397386-154-231641762392823/AnsiballZ_stat.py'
Dec 13 07:19:51 compute-0 sudo[112652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:51 compute-0 python3.9[112654]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:19:51 compute-0 sudo[112652]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:52 compute-0 sudo[112804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iihzsnaauxivkbgzqrezokfttxylvpte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610392.2141373-163-246401202047564/AnsiballZ_stat.py'
Dec 13 07:19:52 compute-0 sudo[112804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:52 compute-0 ceph-mon[74928]: 9.16 scrub starts
Dec 13 07:19:52 compute-0 ceph-mon[74928]: 9.16 scrub ok
Dec 13 07:19:52 compute-0 ceph-mon[74928]: 10.12 scrub starts
Dec 13 07:19:52 compute-0 ceph-mon[74928]: 10.12 scrub ok
Dec 13 07:19:52 compute-0 python3.9[112806]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:19:52 compute-0 sudo[112804]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:52 compute-0 sudo[112956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fivjmjdxsdavyjnvvapnptbzypaupbiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610392.7498758-173-278154736684423/AnsiballZ_command.py'
Dec 13 07:19:52 compute-0 sudo[112956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:53 compute-0 python3.9[112958]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:19:53 compute-0 sudo[112956]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:53 compute-0 ceph-mon[74928]: pgmap v293: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:53 compute-0 sudo[113109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzkeotkptkcqbpcsdmxqijreyrbuzpeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610393.2723842-183-176821726686768/AnsiballZ_service_facts.py'
Dec 13 07:19:53 compute-0 sudo[113109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:53 compute-0 python3.9[113111]: ansible-service_facts Invoked
Dec 13 07:19:53 compute-0 network[113128]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:19:53 compute-0 network[113129]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:19:53 compute-0 network[113130]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:19:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:55 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 13 07:19:55 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 13 07:19:55 compute-0 ceph-mon[74928]: pgmap v294: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:55 compute-0 sudo[113109]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:56 compute-0 sudo[113413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pynzmnavlxmayyjteloslscpxxbbiesa ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765610396.1672318-198-174219132852191/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765610396.1672318-198-174219132852191/args'
Dec 13 07:19:56 compute-0 sudo[113413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:56 compute-0 sudo[113413]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:56 compute-0 ceph-mon[74928]: 10.14 scrub starts
Dec 13 07:19:56 compute-0 ceph-mon[74928]: 10.14 scrub ok
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.530942) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396531019, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7257, "num_deletes": 251, "total_data_size": 9856231, "memory_usage": 10017920, "flush_reason": "Manual Compaction"}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396544647, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7723937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 126, "largest_seqno": 7380, "table_properties": {"data_size": 7697472, "index_size": 17070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 77106, "raw_average_key_size": 23, "raw_value_size": 7634415, "raw_average_value_size": 2308, "num_data_blocks": 750, "num_entries": 3307, "num_filter_entries": 3307, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610002, "oldest_key_time": 1765610002, "file_creation_time": 1765610396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 13733 microseconds, and 11146 cpu microseconds.
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.544679) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7723937 bytes OK
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.544694) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.545784) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.545817) EVENT_LOG_v1 {"time_micros": 1765610396545810, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.545846) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9824501, prev total WAL file size 9824501, number of live WAL files 2.
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.547886) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7542KB) 13(47KB) 8(1944B)]
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396547954, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7774062, "oldest_snapshot_seqno": -1}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3118 keys, 7732963 bytes, temperature: kUnknown
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396563824, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7732963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7707034, "index_size": 17051, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 75132, "raw_average_key_size": 24, "raw_value_size": 7645568, "raw_average_value_size": 2452, "num_data_blocks": 751, "num_entries": 3118, "num_filter_entries": 3118, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765610396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.563977) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7732963 bytes
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.564278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 488.3 rd, 485.7 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3406, records dropped: 288 output_compression: NoCompression
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.564292) EVENT_LOG_v1 {"time_micros": 1765610396564285, "job": 4, "event": "compaction_finished", "compaction_time_micros": 15920, "compaction_time_cpu_micros": 12875, "output_level": 6, "num_output_files": 1, "total_output_size": 7732963, "num_input_records": 3406, "num_output_records": 3118, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396565385, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396565451, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610396565482, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 13 07:19:56 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:19:56.547823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:19:56 compute-0 sudo[113581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfqreywhuxwprtyjaxynykbjglridwbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610396.6512873-209-232731245907038/AnsiballZ_dnf.py'
Dec 13 07:19:56 compute-0 sudo[113581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:57 compute-0 python3.9[113583]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:19:57 compute-0 ceph-mon[74928]: pgmap v295: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:19:58 compute-0 sudo[113581]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 13 07:19:58 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 13 07:19:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:58 compute-0 sudo[113734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmchnsiekmfcyhdmngikbxolscpgzmtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610398.2766497-222-279321127066214/AnsiballZ_package_facts.py'
Dec 13 07:19:58 compute-0 sudo[113734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:58 compute-0 python3.9[113736]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 13 07:19:59 compute-0 sudo[113734]: pam_unix(sudo:session): session closed for user root
Dec 13 07:19:59 compute-0 ceph-mon[74928]: 9.15 scrub starts
Dec 13 07:19:59 compute-0 ceph-mon[74928]: 9.15 scrub ok
Dec 13 07:19:59 compute-0 ceph-mon[74928]: pgmap v296: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:19:59 compute-0 sudo[113886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfglnzdbwpddilujwophhyhmxhhxkofk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610399.5922549-232-33842866712735/AnsiballZ_stat.py'
Dec 13 07:19:59 compute-0 sudo[113886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:19:59 compute-0 python3.9[113888]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:19:59 compute-0 sudo[113886]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:00 compute-0 sudo[113964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfujcjvzfieusmobxfmrsmtywulwqvbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610399.5922549-232-33842866712735/AnsiballZ_file.py'
Dec 13 07:20:00 compute-0 sudo[113964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:00 compute-0 python3.9[113966]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:00 compute-0 sudo[113964]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:00 compute-0 sudo[114116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eixtlpireyeqjdivonfhhmkqgfmzvtau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610400.5193758-244-87078216461477/AnsiballZ_stat.py'
Dec 13 07:20:00 compute-0 sudo[114116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:00 compute-0 python3.9[114118]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:00 compute-0 sudo[114116]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:01 compute-0 sudo[114194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyslgedwcdyinmgxizwhuusydphmmuly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610400.5193758-244-87078216461477/AnsiballZ_file.py'
Dec 13 07:20:01 compute-0 sudo[114194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:01 compute-0 python3.9[114196]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:01 compute-0 sudo[114194]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:01 compute-0 ceph-mon[74928]: pgmap v297: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:01 compute-0 sudo[114346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haxdxtmcxcqtmdyriaagngladsuftgxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610401.5899274-262-23650003689449/AnsiballZ_lineinfile.py'
Dec 13 07:20:01 compute-0 sudo[114346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:02 compute-0 python3.9[114348]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:02 compute-0 sudo[114346]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:02 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 13 07:20:02 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 13 07:20:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:02 compute-0 sudo[114498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwofctaujmxsvzdjsylkrdgcadndybr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610402.527189-277-114769339620040/AnsiballZ_setup.py'
Dec 13 07:20:02 compute-0 sudo[114498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:02 compute-0 python3.9[114500]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:20:03 compute-0 sudo[114498]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:03 compute-0 ceph-mon[74928]: 9.14 scrub starts
Dec 13 07:20:03 compute-0 ceph-mon[74928]: 9.14 scrub ok
Dec 13 07:20:03 compute-0 ceph-mon[74928]: pgmap v298: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:03 compute-0 sudo[114582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwmzktktyhbrjzjtikxloarktnvziun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610402.527189-277-114769339620040/AnsiballZ_systemd.py'
Dec 13 07:20:03 compute-0 sudo[114582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:03 compute-0 python3.9[114584]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:20:03 compute-0 sudo[114582]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:04 compute-0 sshd-session[110073]: Connection closed by 192.168.122.30 port 44396
Dec 13 07:20:04 compute-0 sshd-session[110070]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:20:04 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 13 07:20:04 compute-0 systemd[1]: session-40.scope: Consumed 16.811s CPU time.
Dec 13 07:20:04 compute-0 systemd-logind[745]: Session 40 logged out. Waiting for processes to exit.
Dec 13 07:20:04 compute-0 systemd-logind[745]: Removed session 40.
Dec 13 07:20:05 compute-0 ceph-mon[74928]: pgmap v299: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:06 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 13 07:20:06 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 13 07:20:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:07 compute-0 ceph-mon[74928]: 9.0 scrub starts
Dec 13 07:20:07 compute-0 ceph-mon[74928]: 9.0 scrub ok
Dec 13 07:20:07 compute-0 ceph-mon[74928]: pgmap v300: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:08 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 13 07:20:08 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 13 07:20:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:20:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:20:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:20:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:20:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:20:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:20:09 compute-0 ceph-mon[74928]: 9.2 scrub starts
Dec 13 07:20:09 compute-0 ceph-mon[74928]: 9.2 scrub ok
Dec 13 07:20:09 compute-0 ceph-mon[74928]: pgmap v301: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:09 compute-0 sshd-session[114611]: Accepted publickey for zuul from 192.168.122.30 port 42736 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:20:09 compute-0 systemd-logind[745]: New session 41 of user zuul.
Dec 13 07:20:09 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 13 07:20:09 compute-0 sshd-session[114611]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:20:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:10 compute-0 sudo[114765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvvckrtszlwmlnasuhvsphyhtyyiiikk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610409.9894147-22-29980652054432/AnsiballZ_file.py'
Dec 13 07:20:10 compute-0 sudo[114765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:10 compute-0 python3.9[114767]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:10 compute-0 sudo[114765]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:11 compute-0 sudo[114917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgrgfivfhpkzobiuhrsdmjtdpecjynvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610410.6695764-34-46421126275210/AnsiballZ_stat.py'
Dec 13 07:20:11 compute-0 sudo[114917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:11 compute-0 python3.9[114919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:11 compute-0 sudo[114917]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:11 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 13 07:20:11 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 13 07:20:11 compute-0 sudo[114995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnxjzvfzgpspqzeortgtqvuybagvwblr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610410.6695764-34-46421126275210/AnsiballZ_file.py'
Dec 13 07:20:11 compute-0 sudo[114995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:11 compute-0 ceph-mon[74928]: pgmap v302: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:11 compute-0 python3.9[114997]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:11 compute-0 sudo[114995]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:11 compute-0 sshd-session[114614]: Connection closed by 192.168.122.30 port 42736
Dec 13 07:20:11 compute-0 sshd-session[114611]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:20:11 compute-0 systemd-logind[745]: Session 41 logged out. Waiting for processes to exit.
Dec 13 07:20:11 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 13 07:20:11 compute-0 systemd[1]: session-41.scope: Consumed 1.239s CPU time.
Dec 13 07:20:11 compute-0 systemd-logind[745]: Removed session 41.
Dec 13 07:20:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:12 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 13 07:20:12 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 13 07:20:12 compute-0 ceph-mon[74928]: 9.a scrub starts
Dec 13 07:20:12 compute-0 ceph-mon[74928]: 9.a scrub ok
Dec 13 07:20:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:13 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 13 07:20:13 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 13 07:20:13 compute-0 ceph-mon[74928]: pgmap v303: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:13 compute-0 ceph-mon[74928]: 9.4 scrub starts
Dec 13 07:20:13 compute-0 ceph-mon[74928]: 9.4 scrub ok
Dec 13 07:20:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:14 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 13 07:20:14 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 13 07:20:14 compute-0 ceph-mon[74928]: 9.1a scrub starts
Dec 13 07:20:14 compute-0 ceph-mon[74928]: 9.1a scrub ok
Dec 13 07:20:14 compute-0 ceph-mon[74928]: 9.b scrub starts
Dec 13 07:20:14 compute-0 ceph-mon[74928]: 9.b scrub ok
Dec 13 07:20:15 compute-0 ceph-mon[74928]: pgmap v304: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:16 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 13 07:20:16 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 13 07:20:16 compute-0 ceph-mon[74928]: 9.5 scrub starts
Dec 13 07:20:16 compute-0 sshd-session[115022]: Accepted publickey for zuul from 192.168.122.30 port 42742 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:20:16 compute-0 systemd-logind[745]: New session 42 of user zuul.
Dec 13 07:20:16 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 13 07:20:16 compute-0 sshd-session[115022]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:20:17 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 13 07:20:17 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 13 07:20:17 compute-0 python3.9[115175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:20:17 compute-0 ceph-mon[74928]: pgmap v305: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:17 compute-0 ceph-mon[74928]: 9.5 scrub ok
Dec 13 07:20:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:18 compute-0 sudo[115329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcgmsbwbajinrevjuscyzcvpbjdiudqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610417.8577325-33-211352845827815/AnsiballZ_file.py'
Dec 13 07:20:18 compute-0 sudo[115329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:18 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 13 07:20:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:18 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 13 07:20:18 compute-0 python3.9[115331]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:18 compute-0 sudo[115329]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:18 compute-0 ceph-mon[74928]: 9.12 scrub starts
Dec 13 07:20:18 compute-0 ceph-mon[74928]: 9.12 scrub ok
Dec 13 07:20:18 compute-0 sudo[115504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpcrkfhzunztmnqekuijdqmkblgwlaaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610418.4533353-41-61280534617306/AnsiballZ_stat.py'
Dec 13 07:20:18 compute-0 sudo[115504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:18 compute-0 python3.9[115506]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:18 compute-0 sudo[115504]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:19 compute-0 sudo[115582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebbdlphjlsgqnxqjaaxudsbribbevuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610418.4533353-41-61280534617306/AnsiballZ_file.py'
Dec 13 07:20:19 compute-0 sudo[115582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:19 compute-0 python3.9[115584]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.nsewnowd recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:19 compute-0 sudo[115582]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:19 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 13 07:20:19 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 13 07:20:19 compute-0 ceph-mon[74928]: 9.10 scrub starts
Dec 13 07:20:19 compute-0 ceph-mon[74928]: pgmap v306: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:19 compute-0 ceph-mon[74928]: 9.10 scrub ok
Dec 13 07:20:19 compute-0 sudo[115734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaocrobwrlvvpcohslgxqcewrfagdvxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610419.581769-61-28907124689337/AnsiballZ_stat.py'
Dec 13 07:20:19 compute-0 sudo[115734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:19 compute-0 python3.9[115736]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:19 compute-0 sudo[115734]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:20 compute-0 sudo[115812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftxxqpyskwafaympeetciptzigogvopy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610419.581769-61-28907124689337/AnsiballZ_file.py'
Dec 13 07:20:20 compute-0 sudo[115812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:20 compute-0 python3.9[115814]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.z69g7aew recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:20 compute-0 sudo[115812]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:20 compute-0 sudo[115964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwcfqkhppmpwlrlcxuszbnxsnqjkbgve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610420.39464-74-275331122104385/AnsiballZ_file.py'
Dec 13 07:20:20 compute-0 sudo[115964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:20 compute-0 ceph-mon[74928]: 9.1f scrub starts
Dec 13 07:20:20 compute-0 ceph-mon[74928]: 9.1f scrub ok
Dec 13 07:20:20 compute-0 python3.9[115966]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:20:20 compute-0 sudo[115964]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:21 compute-0 sudo[116116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqjlimcxufzeiwjseyjfjtwrdxosufx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610420.8692715-82-14691284579223/AnsiballZ_stat.py'
Dec 13 07:20:21 compute-0 sudo[116116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:21 compute-0 python3.9[116118]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:21 compute-0 sudo[116116]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:21 compute-0 sudo[116196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihphxtfbcomrbgjrssvresnixkywkabo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610420.8692715-82-14691284579223/AnsiballZ_file.py'
Dec 13 07:20:21 compute-0 sudo[116196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:21 compute-0 sudo[116194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:20:21 compute-0 sudo[116194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:21 compute-0 sudo[116194]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:21 compute-0 sudo[116222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:20:21 compute-0 sudo[116222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:21 compute-0 python3.9[116213]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:20:21 compute-0 sudo[116196]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:21 compute-0 ceph-mon[74928]: pgmap v307: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:21 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 13 07:20:21 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 13 07:20:21 compute-0 sudo[116425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wviemltydjhrhbjdothoxfdvtsvwrgad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610421.6535366-82-170102813738940/AnsiballZ_stat.py'
Dec 13 07:20:21 compute-0 sudo[116425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:21 compute-0 sudo[116222]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:20:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:20:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:20:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:20:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:20:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:20:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:20:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:20:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:20:21 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:20:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:20:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:20:21 compute-0 sudo[116428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:20:21 compute-0 sudo[116428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:21 compute-0 sudo[116428]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:21 compute-0 sudo[116453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:20:21 compute-0 sudo[116453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:21 compute-0 python3.9[116427]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:22 compute-0 sudo[116425]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:22 compute-0 sudo[116572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfgjwjenqdxixvkbidtqzkncxpolxld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610421.6535366-82-170102813738940/AnsiballZ_file.py'
Dec 13 07:20:22 compute-0 sudo[116572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:22 compute-0 podman[116541]: 2025-12-13 07:20:22.164212834 +0000 UTC m=+0.032276287 container create 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:20:22 compute-0 systemd[1]: Started libpod-conmon-0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd.scope.
Dec 13 07:20:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:20:22 compute-0 podman[116541]: 2025-12-13 07:20:22.220605291 +0000 UTC m=+0.088668764 container init 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:20:22 compute-0 podman[116541]: 2025-12-13 07:20:22.226085188 +0000 UTC m=+0.094148641 container start 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 07:20:22 compute-0 podman[116541]: 2025-12-13 07:20:22.227470802 +0000 UTC m=+0.095534255 container attach 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:20:22 compute-0 stupefied_wilson[116579]: 167 167
Dec 13 07:20:22 compute-0 systemd[1]: libpod-0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd.scope: Deactivated successfully.
Dec 13 07:20:22 compute-0 conmon[116579]: conmon 0d53a71348fd9eb37859 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd.scope/container/memory.events
Dec 13 07:20:22 compute-0 podman[116541]: 2025-12-13 07:20:22.231231706 +0000 UTC m=+0.099295159 container died 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4406dc2f3f8b1530d13c68d0597c3c83d4feb7a6b3e770a0b1b7e7d6862b7e18-merged.mount: Deactivated successfully.
Dec 13 07:20:22 compute-0 podman[116541]: 2025-12-13 07:20:22.15218784 +0000 UTC m=+0.020251313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:20:22 compute-0 podman[116541]: 2025-12-13 07:20:22.252154304 +0000 UTC m=+0.120217757 container remove 0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:20:22 compute-0 systemd[1]: libpod-conmon-0d53a71348fd9eb37859951110804d823c191b35931eb5fc94b5fe93b1d627fd.scope: Deactivated successfully.
Dec 13 07:20:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:22 compute-0 python3.9[116575]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:20:22 compute-0 podman[116601]: 2025-12-13 07:20:22.36550129 +0000 UTC m=+0.027614212 container create e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:20:22 compute-0 sudo[116572]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:22 compute-0 systemd[1]: Started libpod-conmon-e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9.scope.
Dec 13 07:20:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:22 compute-0 podman[116601]: 2025-12-13 07:20:22.42477415 +0000 UTC m=+0.086887093 container init e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:20:22 compute-0 podman[116601]: 2025-12-13 07:20:22.43139573 +0000 UTC m=+0.093508654 container start e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 07:20:22 compute-0 podman[116601]: 2025-12-13 07:20:22.432673651 +0000 UTC m=+0.094786594 container attach e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:20:22 compute-0 podman[116601]: 2025-12-13 07:20:22.354371394 +0000 UTC m=+0.016484337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:20:22 compute-0 ceph-mon[74928]: 9.11 scrub starts
Dec 13 07:20:22 compute-0 ceph-mon[74928]: 9.11 scrub ok
Dec 13 07:20:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:20:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:20:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:20:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:20:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:20:22 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:20:22 compute-0 sudo[116776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxrgavuzzlmuvblcofirtzvqhvfuoofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610422.4829333-105-147114635493450/AnsiballZ_file.py'
Dec 13 07:20:22 compute-0 sudo[116776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:22 compute-0 vigorous_goodall[116622]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:20:22 compute-0 vigorous_goodall[116622]: --> All data devices are unavailable
Dec 13 07:20:22 compute-0 systemd[1]: libpod-e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9.scope: Deactivated successfully.
Dec 13 07:20:22 compute-0 podman[116601]: 2025-12-13 07:20:22.78288355 +0000 UTC m=+0.444996483 container died e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 07:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-74ce3842cae9bb91cc4f6370819e61bba4565a677c3f12798e62185343b55d1c-merged.mount: Deactivated successfully.
Dec 13 07:20:22 compute-0 podman[116601]: 2025-12-13 07:20:22.808860452 +0000 UTC m=+0.470973375 container remove e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:20:22 compute-0 python3.9[116778]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:22 compute-0 sudo[116453]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:22 compute-0 systemd[1]: libpod-conmon-e95f2efaa55ef3c4e96c20a022eff263c555bb241559f16c872d7f62d24197e9.scope: Deactivated successfully.
Dec 13 07:20:22 compute-0 sudo[116776]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:22 compute-0 sudo[116798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:20:22 compute-0 sudo[116798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:22 compute-0 sudo[116798]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:22 compute-0 sudo[116841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:20:22 compute-0 sudo[116841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:23 compute-0 podman[116980]: 2025-12-13 07:20:23.146482804 +0000 UTC m=+0.029413886 container create e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:20:23 compute-0 systemd[1]: Started libpod-conmon-e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2.scope.
Dec 13 07:20:23 compute-0 sudo[117017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoeniljodrjkxtizylozumqjhandxzwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610422.9689353-113-52682451921614/AnsiballZ_stat.py'
Dec 13 07:20:23 compute-0 sudo[117017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:20:23 compute-0 podman[116980]: 2025-12-13 07:20:23.200989655 +0000 UTC m=+0.083920757 container init e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:20:23 compute-0 podman[116980]: 2025-12-13 07:20:23.205601004 +0000 UTC m=+0.088532096 container start e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:20:23 compute-0 podman[116980]: 2025-12-13 07:20:23.207032683 +0000 UTC m=+0.089963775 container attach e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:20:23 compute-0 laughing_bhaskara[117022]: 167 167
Dec 13 07:20:23 compute-0 systemd[1]: libpod-e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2.scope: Deactivated successfully.
Dec 13 07:20:23 compute-0 podman[116980]: 2025-12-13 07:20:23.210233582 +0000 UTC m=+0.093164664 container died e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b74922e257878a8ed47770f394699bff145159dca6ff947d1dc20422ddf44b79-merged.mount: Deactivated successfully.
Dec 13 07:20:23 compute-0 podman[116980]: 2025-12-13 07:20:23.230236425 +0000 UTC m=+0.113167507 container remove e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 07:20:23 compute-0 podman[116980]: 2025-12-13 07:20:23.135052954 +0000 UTC m=+0.017984046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:20:23 compute-0 systemd[1]: libpod-conmon-e3fe181cd7228b4b333b516f750ca1b7d75c99be2dd7367bf49c6cbe7bc6d8d2.scope: Deactivated successfully.
Dec 13 07:20:23 compute-0 python3.9[117024]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:23 compute-0 podman[117044]: 2025-12-13 07:20:23.344694737 +0000 UTC m=+0.028005900 container create 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:20:23 compute-0 systemd[1]: Started libpod-conmon-2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52.scope.
Dec 13 07:20:23 compute-0 sudo[117017]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:23 compute-0 podman[117044]: 2025-12-13 07:20:23.406636602 +0000 UTC m=+0.089947786 container init 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:20:23 compute-0 podman[117044]: 2025-12-13 07:20:23.413091439 +0000 UTC m=+0.096402603 container start 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:20:23 compute-0 podman[117044]: 2025-12-13 07:20:23.414899839 +0000 UTC m=+0.098211003 container attach 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:20:23 compute-0 podman[117044]: 2025-12-13 07:20:23.333426 +0000 UTC m=+0.016737184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:20:23 compute-0 sudo[117137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wstcwisaxfjelhwqnzvrgarwgjamsojh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610422.9689353-113-52682451921614/AnsiballZ_file.py'
Dec 13 07:20:23 compute-0 sudo[117137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:23 compute-0 ceph-mon[74928]: pgmap v308: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:23 compute-0 funny_albattani[117059]: {
Dec 13 07:20:23 compute-0 funny_albattani[117059]:     "0": [
Dec 13 07:20:23 compute-0 funny_albattani[117059]:         {
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "devices": [
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "/dev/loop3"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             ],
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_name": "ceph_lv0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_size": "21470642176",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "name": "ceph_lv0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "tags": {
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cluster_name": "ceph",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.crush_device_class": "",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.encrypted": "0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.objectstore": "bluestore",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osd_id": "0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.type": "block",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.vdo": "0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.with_tpm": "0"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             },
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "type": "block",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "vg_name": "ceph_vg0"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:         }
Dec 13 07:20:23 compute-0 funny_albattani[117059]:     ],
Dec 13 07:20:23 compute-0 funny_albattani[117059]:     "1": [
Dec 13 07:20:23 compute-0 funny_albattani[117059]:         {
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "devices": [
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "/dev/loop4"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             ],
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_name": "ceph_lv1",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_size": "21470642176",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "name": "ceph_lv1",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "tags": {
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cluster_name": "ceph",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.crush_device_class": "",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.encrypted": "0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.objectstore": "bluestore",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osd_id": "1",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.type": "block",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.vdo": "0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.with_tpm": "0"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             },
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "type": "block",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "vg_name": "ceph_vg1"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:         }
Dec 13 07:20:23 compute-0 funny_albattani[117059]:     ],
Dec 13 07:20:23 compute-0 funny_albattani[117059]:     "2": [
Dec 13 07:20:23 compute-0 funny_albattani[117059]:         {
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "devices": [
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "/dev/loop5"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             ],
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_name": "ceph_lv2",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_size": "21470642176",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "name": "ceph_lv2",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "tags": {
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.cluster_name": "ceph",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.crush_device_class": "",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.encrypted": "0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.objectstore": "bluestore",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osd_id": "2",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.type": "block",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.vdo": "0",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:                 "ceph.with_tpm": "0"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             },
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "type": "block",
Dec 13 07:20:23 compute-0 funny_albattani[117059]:             "vg_name": "ceph_vg2"
Dec 13 07:20:23 compute-0 funny_albattani[117059]:         }
Dec 13 07:20:23 compute-0 funny_albattani[117059]:     ]
Dec 13 07:20:23 compute-0 funny_albattani[117059]: }
Dec 13 07:20:23 compute-0 systemd[1]: libpod-2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52.scope: Deactivated successfully.
Dec 13 07:20:23 compute-0 podman[117044]: 2025-12-13 07:20:23.651666321 +0000 UTC m=+0.334977484 container died 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 07:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c90511c1fb0c9f8e4a075c8c8af697bed1aff4070ad1da070477e306e104b0f-merged.mount: Deactivated successfully.
Dec 13 07:20:23 compute-0 podman[117044]: 2025-12-13 07:20:23.674339211 +0000 UTC m=+0.357650374 container remove 2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_albattani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 07:20:23 compute-0 systemd[1]: libpod-conmon-2c98c968aa8d24a898164fc00c447eea8b5a4db74751ae83bcb3745f9fd4fd52.scope: Deactivated successfully.
Dec 13 07:20:23 compute-0 python3.9[117139]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:23 compute-0 sudo[117137]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:23 compute-0 sudo[116841]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:23 compute-0 sudo[117154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:20:23 compute-0 sudo[117154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:23 compute-0 sudo[117154]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:23 compute-0 sudo[117203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:20:23 compute-0 sudo[117203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:24 compute-0 sudo[117370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjwxvypzvupledhqdbxfeftlcgszskkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610423.8275073-125-147314341482145/AnsiballZ_stat.py'
Dec 13 07:20:24 compute-0 sudo[117370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:24 compute-0 podman[117347]: 2025-12-13 07:20:24.034517261 +0000 UTC m=+0.032380604 container create 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:20:24 compute-0 systemd[1]: Started libpod-conmon-345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d.scope.
Dec 13 07:20:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:20:24 compute-0 podman[117347]: 2025-12-13 07:20:24.087799892 +0000 UTC m=+0.085663245 container init 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:20:24 compute-0 podman[117347]: 2025-12-13 07:20:24.092507853 +0000 UTC m=+0.090371195 container start 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:20:24 compute-0 podman[117347]: 2025-12-13 07:20:24.093636383 +0000 UTC m=+0.091499725 container attach 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:20:24 compute-0 goofy_mclean[117380]: 167 167
Dec 13 07:20:24 compute-0 systemd[1]: libpod-345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d.scope: Deactivated successfully.
Dec 13 07:20:24 compute-0 podman[117347]: 2025-12-13 07:20:24.096495546 +0000 UTC m=+0.094358898 container died 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce43c3aa87fc05cef6a04f151f9cd871aeb4a02455d11670c2a28e25ded39aea-merged.mount: Deactivated successfully.
Dec 13 07:20:24 compute-0 podman[117347]: 2025-12-13 07:20:24.11469652 +0000 UTC m=+0.112559862 container remove 345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:20:24 compute-0 podman[117347]: 2025-12-13 07:20:24.021589084 +0000 UTC m=+0.019452446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:20:24 compute-0 systemd[1]: libpod-conmon-345616dd7924107a0dc40779278d8e649fe980c1ac6eba21d1ce32afa4d3cd1d.scope: Deactivated successfully.
Dec 13 07:20:24 compute-0 python3.9[117374]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:24 compute-0 sudo[117370]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:24 compute-0 podman[117404]: 2025-12-13 07:20:24.228199199 +0000 UTC m=+0.028717884 container create 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:20:24 compute-0 systemd[1]: Started libpod-conmon-64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3.scope.
Dec 13 07:20:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:20:24 compute-0 podman[117404]: 2025-12-13 07:20:24.274379602 +0000 UTC m=+0.074898307 container init 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:20:24 compute-0 podman[117404]: 2025-12-13 07:20:24.280509646 +0000 UTC m=+0.081028331 container start 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:20:24 compute-0 podman[117404]: 2025-12-13 07:20:24.283774325 +0000 UTC m=+0.084293020 container attach 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 07:20:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:24 compute-0 podman[117404]: 2025-12-13 07:20:24.216487375 +0000 UTC m=+0.017006081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:20:24 compute-0 sudo[117496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdltoodjrigpfdivwwopdoqmrlofopk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610423.8275073-125-147314341482145/AnsiballZ_file.py'
Dec 13 07:20:24 compute-0 sudo[117496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:24 compute-0 python3.9[117498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:24 compute-0 sudo[117496]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:24 compute-0 lvm[117645]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:20:24 compute-0 lvm[117645]: VG ceph_vg0 finished
Dec 13 07:20:24 compute-0 lvm[117648]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:20:24 compute-0 lvm[117648]: VG ceph_vg1 finished
Dec 13 07:20:24 compute-0 lvm[117651]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:20:24 compute-0 lvm[117651]: VG ceph_vg2 finished
Dec 13 07:20:24 compute-0 quirky_lalande[117438]: {}
Dec 13 07:20:24 compute-0 systemd[1]: libpod-64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3.scope: Deactivated successfully.
Dec 13 07:20:24 compute-0 podman[117404]: 2025-12-13 07:20:24.914725886 +0000 UTC m=+0.715244571 container died 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 07:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b6b528489ee78af0f63c35341a9a420c7411a5875eb22ce8b7235f4025ab9d0-merged.mount: Deactivated successfully.
Dec 13 07:20:24 compute-0 podman[117404]: 2025-12-13 07:20:24.938732722 +0000 UTC m=+0.739251407 container remove 64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:20:24 compute-0 systemd[1]: libpod-conmon-64708052e515cd9c88f6d2009d264988e74a601058515385574400142a25a9f3.scope: Deactivated successfully.
Dec 13 07:20:24 compute-0 sudo[117203]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:20:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:20:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:20:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:20:25 compute-0 sudo[117663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:20:25 compute-0 sudo[117663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:20:25 compute-0 sudo[117663]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:25 compute-0 sudo[117761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoqlcgwbrhgbgsfutuofymvjtwyflbnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610424.6867447-137-12678692451884/AnsiballZ_systemd.py'
Dec 13 07:20:25 compute-0 sudo[117761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:25 compute-0 python3.9[117763]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:20:25 compute-0 systemd[1]: Reloading.
Dec 13 07:20:25 compute-0 systemd-sysv-generator[117788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:20:25 compute-0 systemd-rc-local-generator[117784]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:20:25 compute-0 ceph-mon[74928]: pgmap v309: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:25 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:20:25 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:20:25 compute-0 sudo[117761]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:25 compute-0 sudo[117952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvftafdfaunypolcuwcwgdcweqgxfraq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610425.7675004-145-253818131085621/AnsiballZ_stat.py'
Dec 13 07:20:25 compute-0 sudo[117952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:26 compute-0 python3.9[117954]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:26 compute-0 sudo[117952]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:26 compute-0 sudo[118030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvanyjolhjvwbjipjrzmepkoamaqfocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610425.7675004-145-253818131085621/AnsiballZ_file.py'
Dec 13 07:20:26 compute-0 sudo[118030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:26 compute-0 python3.9[118032]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:26 compute-0 sudo[118030]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:26 compute-0 sudo[118182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wphstjwdpgiyibbabktktgsgfkwowbip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610426.5976286-157-57422717445559/AnsiballZ_stat.py'
Dec 13 07:20:26 compute-0 sudo[118182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:26 compute-0 python3.9[118184]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:26 compute-0 sudo[118182]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:27 compute-0 sudo[118260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzpycriabbksfvozqqouogxuovvypnqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610426.5976286-157-57422717445559/AnsiballZ_file.py'
Dec 13 07:20:27 compute-0 sudo[118260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:27 compute-0 python3.9[118262]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:27 compute-0 sudo[118260]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:27 compute-0 ceph-mon[74928]: pgmap v310: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:27 compute-0 sudo[118412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnoqttrgjxmapzfrykxcgeualhyzvdal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610427.4454105-169-231331943000485/AnsiballZ_systemd.py'
Dec 13 07:20:27 compute-0 sudo[118412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:27 compute-0 python3.9[118414]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:20:27 compute-0 systemd[1]: Reloading.
Dec 13 07:20:27 compute-0 systemd-rc-local-generator[118435]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:20:27 compute-0 systemd-sysv-generator[118438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:20:28 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 07:20:28 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 07:20:28 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 07:20:28 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 07:20:28 compute-0 sudo[118412]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:28 compute-0 python3.9[118605]: ansible-ansible.builtin.service_facts Invoked
Dec 13 07:20:28 compute-0 network[118622]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:20:28 compute-0 network[118623]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:20:28 compute-0 network[118624]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:20:29 compute-0 ceph-mon[74928]: pgmap v311: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:31 compute-0 sudo[118884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbtyyrblgbhtzdcghlgywnufodkhwjfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610431.2381656-195-64200693745392/AnsiballZ_stat.py'
Dec 13 07:20:31 compute-0 sudo[118884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:31 compute-0 ceph-mon[74928]: pgmap v312: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:31 compute-0 python3.9[118886]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:31 compute-0 sudo[118884]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:31 compute-0 sudo[118962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-astxbtmftijlfbaggychzebqwwrjwmnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610431.2381656-195-64200693745392/AnsiballZ_file.py'
Dec 13 07:20:31 compute-0 sudo[118962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:31 compute-0 python3.9[118964]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:32 compute-0 sudo[118962]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:32 compute-0 sudo[119114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovosmbxujabmatpwjlneowwwoakgharl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610432.1555543-208-105554963880067/AnsiballZ_file.py'
Dec 13 07:20:32 compute-0 sudo[119114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:32 compute-0 python3.9[119116]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:32 compute-0 sudo[119114]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:32 compute-0 sudo[119266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orqyhmzkwpbqdhwxrlgmqxynetgtxtuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610432.6655824-216-108606579280279/AnsiballZ_stat.py'
Dec 13 07:20:32 compute-0 sudo[119266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:33 compute-0 python3.9[119268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:33 compute-0 sudo[119266]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:33 compute-0 sudo[119344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyisxthlauegtowqxoexoczcnvfodaau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610432.6655824-216-108606579280279/AnsiballZ_file.py'
Dec 13 07:20:33 compute-0 sudo[119344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:33 compute-0 python3.9[119346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:33 compute-0 sudo[119344]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:33 compute-0 ceph-mon[74928]: pgmap v313: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:34 compute-0 sudo[119496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efmsvlqsuwzazkjbkjqbqctwlzyiojaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610433.6573863-231-267741989200906/AnsiballZ_timezone.py'
Dec 13 07:20:34 compute-0 sudo[119496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:34 compute-0 python3.9[119498]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 07:20:34 compute-0 systemd[1]: Starting Time & Date Service...
Dec 13 07:20:34 compute-0 systemd[1]: Started Time & Date Service.
Dec 13 07:20:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:34 compute-0 sudo[119496]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:34 compute-0 sudo[119652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlfenfubwowbnzqvdfocrpthdgtpwudy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610434.4492242-240-152893529985698/AnsiballZ_file.py'
Dec 13 07:20:34 compute-0 sudo[119652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:34 compute-0 python3.9[119654]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:34 compute-0 sudo[119652]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:35 compute-0 sudo[119804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aepdtmlsepbllzvzerzciyffgxqbnpan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610434.9296806-248-150744142179382/AnsiballZ_stat.py'
Dec 13 07:20:35 compute-0 sudo[119804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:35 compute-0 python3.9[119806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:35 compute-0 sudo[119804]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:35 compute-0 sudo[119882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktrjxjmexuscugxkalxtlgddfenqumtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610434.9296806-248-150744142179382/AnsiballZ_file.py'
Dec 13 07:20:35 compute-0 sudo[119882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:35 compute-0 ceph-mon[74928]: pgmap v314: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:35 compute-0 python3.9[119884]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:35 compute-0 sudo[119882]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:35 compute-0 sudo[120034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvhwarpzyxciouooevwsmzbiiuqfxvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610435.7635076-260-111632802923144/AnsiballZ_stat.py'
Dec 13 07:20:35 compute-0 sudo[120034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:36 compute-0 python3.9[120036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:36 compute-0 sudo[120034]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:36 compute-0 sudo[120112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekesxgxefqxcgoxqgizyvrbmyooqrird ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610435.7635076-260-111632802923144/AnsiballZ_file.py'
Dec 13 07:20:36 compute-0 sudo[120112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:36 compute-0 python3.9[120114]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._xxem502 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:36 compute-0 sudo[120112]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:36 compute-0 sudo[120264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dovsfsacigncfjgboicxffqrbsdiqfqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610436.573667-272-250952766571450/AnsiballZ_stat.py'
Dec 13 07:20:36 compute-0 sudo[120264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:36 compute-0 python3.9[120266]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:36 compute-0 sudo[120264]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:37 compute-0 sudo[120342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwfwqgobcvpbvnzshgsfnywiihxufahq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610436.573667-272-250952766571450/AnsiballZ_file.py'
Dec 13 07:20:37 compute-0 sudo[120342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:37 compute-0 python3.9[120344]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:37 compute-0 sudo[120342]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:37 compute-0 ceph-mon[74928]: pgmap v315: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:37 compute-0 sudo[120494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzqbrvgtmjkoeeotvvusfyoecgvxkhel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610437.3920953-285-14157092459713/AnsiballZ_command.py'
Dec 13 07:20:37 compute-0 sudo[120494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:37 compute-0 python3.9[120496]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:20:37 compute-0 sudo[120494]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:20:38
Dec 13 07:20:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:20:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:20:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Dec 13 07:20:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:20:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:38 compute-0 sudo[120647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iywxoedagbakqxevbpwikhwaylffabxo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610438.0102925-293-55097299125675/AnsiballZ_edpm_nftables_from_files.py'
Dec 13 07:20:38 compute-0 sudo[120647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:38 compute-0 python3[120649]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 07:20:38 compute-0 sudo[120647]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:38 compute-0 sudo[120799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkrktrbbralnrkejvspkfssfnstxltwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610438.6474175-301-212257708856647/AnsiballZ_stat.py'
Dec 13 07:20:38 compute-0 sudo[120799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:39 compute-0 python3.9[120801]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:39 compute-0 sudo[120799]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:20:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:20:39 compute-0 sudo[120877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cchoggnlkpvhlmaftfadnkvdjoauaagd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610438.6474175-301-212257708856647/AnsiballZ_file.py'
Dec 13 07:20:39 compute-0 sudo[120877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:39 compute-0 python3.9[120879]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:39 compute-0 sudo[120877]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:39 compute-0 ceph-mon[74928]: pgmap v316: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:39 compute-0 sudo[121029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwujdvmdaxdmbulytdlhrevvvgjiinph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610439.4928744-313-169152903090420/AnsiballZ_stat.py'
Dec 13 07:20:39 compute-0 sudo[121029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:39 compute-0 python3.9[121031]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:39 compute-0 sudo[121029]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:40 compute-0 sudo[121107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahacapvrhaiugnulzukqvhutyunlrsih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610439.4928744-313-169152903090420/AnsiballZ_file.py'
Dec 13 07:20:40 compute-0 sudo[121107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:40 compute-0 python3.9[121109]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:40 compute-0 sudo[121107]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:40 compute-0 sudo[121259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwhmjlflotsezteemxyzrlrbbbwshraw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610440.343599-325-60877917687037/AnsiballZ_stat.py'
Dec 13 07:20:40 compute-0 sudo[121259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:40 compute-0 python3.9[121261]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:40 compute-0 sudo[121259]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:40 compute-0 sudo[121337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arlqohzyfgpvimhmvflmxiyoomcxywcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610440.343599-325-60877917687037/AnsiballZ_file.py'
Dec 13 07:20:40 compute-0 sudo[121337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:41 compute-0 python3.9[121339]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:41 compute-0 sudo[121337]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:41 compute-0 sudo[121489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juhlvfrtuqbateuylxgagdawldudmcao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610441.1692858-337-122518746777970/AnsiballZ_stat.py'
Dec 13 07:20:41 compute-0 sudo[121489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:41 compute-0 python3.9[121491]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:41 compute-0 sudo[121489]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:41 compute-0 ceph-mon[74928]: pgmap v317: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:41 compute-0 sudo[121567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrfnlrxxtlpurwtjnmaheayazshnoukb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610441.1692858-337-122518746777970/AnsiballZ_file.py'
Dec 13 07:20:41 compute-0 sudo[121567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:41 compute-0 python3.9[121569]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:41 compute-0 sudo[121567]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:42 compute-0 sudo[121719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrfaomqmdcvseyqzuswlkiknyyiqzpwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610441.9734006-349-246675363509078/AnsiballZ_stat.py'
Dec 13 07:20:42 compute-0 sudo[121719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:42 compute-0 python3.9[121721]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:42 compute-0 sudo[121719]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:42 compute-0 sshd-session[71242]: Received disconnect from 192.168.25.167 port 56916:11: disconnected by user
Dec 13 07:20:42 compute-0 sshd-session[71242]: Disconnected from user zuul 192.168.25.167 port 56916
Dec 13 07:20:42 compute-0 sshd-session[71239]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:20:42 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 07:20:42 compute-0 systemd[1]: session-17.scope: Consumed 1min 11.719s CPU time.
Dec 13 07:20:42 compute-0 systemd-logind[745]: Session 17 logged out. Waiting for processes to exit.
Dec 13 07:20:42 compute-0 systemd-logind[745]: Removed session 17.
Dec 13 07:20:42 compute-0 sudo[121797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-novitwpwpnvszavyfahhytwkateznijq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610441.9734006-349-246675363509078/AnsiballZ_file.py'
Dec 13 07:20:42 compute-0 sudo[121797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:42 compute-0 python3.9[121799]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:42 compute-0 ceph-mon[74928]: pgmap v318: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:42 compute-0 sudo[121797]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:43 compute-0 sudo[121949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylqnajoenwujxadsradaptojpngskikj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610442.8635051-362-47286490907756/AnsiballZ_command.py'
Dec 13 07:20:43 compute-0 sudo[121949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:43 compute-0 python3.9[121951]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:20:43 compute-0 sudo[121949]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:43 compute-0 sudo[122104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roybvwthuugvncbnofipdjuwcynabpfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610443.3322742-370-46991465610621/AnsiballZ_blockinfile.py'
Dec 13 07:20:43 compute-0 sudo[122104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:43 compute-0 python3.9[122106]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:43 compute-0 sudo[122104]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:44 compute-0 sudo[122256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswbccqwhklrzhupggtjjqysokyywoty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610444.130418-379-144452266940853/AnsiballZ_file.py'
Dec 13 07:20:44 compute-0 sudo[122256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:44 compute-0 python3.9[122258]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:44 compute-0 sudo[122256]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:44 compute-0 sudo[122408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgwiwojxfdtrmcknbdzzmoihajvufler ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610444.5810132-379-280072726790888/AnsiballZ_file.py'
Dec 13 07:20:44 compute-0 sudo[122408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:44 compute-0 python3.9[122410]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:44 compute-0 sudo[122408]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:45 compute-0 ceph-mon[74928]: pgmap v319: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:45 compute-0 sudo[122560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnwfthjtpwvsvkbnjgxljlmqsdahjcqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610445.0515864-394-185214251447180/AnsiballZ_mount.py'
Dec 13 07:20:45 compute-0 sudo[122560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:45 compute-0 python3.9[122562]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 07:20:45 compute-0 sudo[122560]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:45 compute-0 sudo[122712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkmiqonztbsgtubvbbthmqblpbzqgwcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610445.7395384-394-193603658050989/AnsiballZ_mount.py'
Dec 13 07:20:45 compute-0 sudo[122712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:46 compute-0 python3.9[122714]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 07:20:46 compute-0 sudo[122712]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:46 compute-0 sshd-session[115025]: Connection closed by 192.168.122.30 port 42742
Dec 13 07:20:46 compute-0 sshd-session[115022]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:20:46 compute-0 systemd-logind[745]: Session 42 logged out. Waiting for processes to exit.
Dec 13 07:20:46 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 13 07:20:46 compute-0 systemd[1]: session-42.scope: Consumed 21.283s CPU time.
Dec 13 07:20:46 compute-0 systemd-logind[745]: Removed session 42.
Dec 13 07:20:47 compute-0 ceph-mon[74928]: pgmap v320: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:20:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:20:49 compute-0 ceph-mon[74928]: pgmap v321: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:51 compute-0 ceph-mon[74928]: pgmap v322: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:52 compute-0 sshd-session[122739]: Accepted publickey for zuul from 192.168.122.30 port 49110 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:20:52 compute-0 systemd-logind[745]: New session 43 of user zuul.
Dec 13 07:20:52 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 13 07:20:52 compute-0 sshd-session[122739]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:20:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:52 compute-0 sudo[122892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brtxzqgrjojxaqupefuhvjpsupmubyvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610452.473712-16-71345526448291/AnsiballZ_tempfile.py'
Dec 13 07:20:52 compute-0 sudo[122892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:52 compute-0 python3.9[122894]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 13 07:20:52 compute-0 sudo[122892]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:53 compute-0 ceph-mon[74928]: pgmap v323: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:53 compute-0 sudo[123044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ianjnyqnoxvrktvtjpgzrxkzwbqsndxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610453.1026556-28-135402886067852/AnsiballZ_stat.py'
Dec 13 07:20:53 compute-0 sudo[123044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:53 compute-0 python3.9[123046]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:20:53 compute-0 sudo[123044]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:53 compute-0 sudo[123198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkstyqesfadxlrzdxafmzdmiszcakjnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610453.7052698-36-42545203288783/AnsiballZ_slurp.py'
Dec 13 07:20:53 compute-0 sudo[123198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:54 compute-0 python3.9[123200]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 13 07:20:54 compute-0 sudo[123198]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:54 compute-0 sudo[123350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzdlwabliedjrcbbzkasjyjdxjahwhkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610454.288385-44-123153932086596/AnsiballZ_stat.py'
Dec 13 07:20:54 compute-0 sudo[123350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:54 compute-0 python3.9[123352]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.6xzeyhf5 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:20:54 compute-0 sudo[123350]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:55 compute-0 sudo[123475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgkwtzbthcwyajcbaxmoiovdhhvzivhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610454.288385-44-123153932086596/AnsiballZ_copy.py'
Dec 13 07:20:55 compute-0 sudo[123475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:55 compute-0 python3.9[123477]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.6xzeyhf5 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610454.288385-44-123153932086596/.source.6xzeyhf5 _original_basename=.pm6pzats follow=False checksum=c1ddfbd914d066c629baab8f2d3e4b9f69b0d895 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:55 compute-0 sudo[123475]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:55 compute-0 ceph-mon[74928]: pgmap v324: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:55 compute-0 sudo[123627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnczklthasphovyxxemfcxgdvpozkoiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610455.2978435-59-175645987298569/AnsiballZ_setup.py'
Dec 13 07:20:55 compute-0 sudo[123627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:56 compute-0 python3.9[123629]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:20:56 compute-0 sudo[123627]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:56 compute-0 sudo[123779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htknndijxybzbinuxnklovyukhdhlhct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610456.2173-68-260357826163627/AnsiballZ_blockinfile.py'
Dec 13 07:20:56 compute-0 sudo[123779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:56 compute-0 python3.9[123781]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCekpfjOZMQHu4kGkMmbnPcCtz1ykBu18rwwghFZ6JdZNeLGT0geVZzeGTxx67o32Xucl5rndeaEtZvZfxTXM1W/3Z9ig0x1tTtqK2lTLjxcw4+AxChtq8Mt1LZKUi2MHVUdDkB8UwKvPPC6k5NFQRBu1jsX63zDiUCudXQlFm49OLA8BZh7VuZYlpOMnuiPC9cWsSAehEH4hmIdqlyl7xhfBn/4IId10yPH4Bev4qk4z212G730uw0ldn9RfPP2Batr31zKwOCUveVL5V48yK6VIj2O4uztbh6yagWlbqPwmUoYdvokyMVmONCStsc8BDSSaTmH7gv6cm1tfpfpKJlBo25kpuVocNQaaZB8/x71weojzujWfYBPfwbGARRkq9lgjdmyLJot9XdtcDkAKNeE6nzDo29nj1SpYzDYu2OrwI8RN9TLEQyXyUi80L4ELrI2WrVf5NwIvfG0ZKHurHxEDYcJKris+z3lCdPHRbw/D0HAhFZ6YnnViCeqLe+XL0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDlhQSLisbnaeA/5eqQ07vXPLvOWH+wLodInwcPHjCbq
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL+1SrJ/t+tkNcFtDd1R0f0/5owYzeRM7hR2TrpSEQtZk5y2BWR+htC7NOo7cYghMztLnyJaOIsNSp9NjO5UEBE=
                                              create=True mode=0644 path=/tmp/ansible.6xzeyhf5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:56 compute-0 sudo[123779]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:57 compute-0 sudo[123931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxzuhztkrdffmdemxfnfmflgmstonfvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610456.8057098-76-102509673256770/AnsiballZ_command.py'
Dec 13 07:20:57 compute-0 sudo[123931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:57 compute-0 python3.9[123933]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.6xzeyhf5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:20:57 compute-0 sudo[123931]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:57 compute-0 ceph-mon[74928]: pgmap v325: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:57 compute-0 sudo[124085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uirdbsfacmwepyxnjyexvimewxnxjmnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610457.4032533-84-108660180852177/AnsiballZ_file.py'
Dec 13 07:20:57 compute-0 sudo[124085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:20:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:20:57 compute-0 python3.9[124087]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.6xzeyhf5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:20:57 compute-0 sudo[124085]: pam_unix(sudo:session): session closed for user root
Dec 13 07:20:58 compute-0 sshd-session[122742]: Connection closed by 192.168.122.30 port 49110
Dec 13 07:20:58 compute-0 sshd-session[122739]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:20:58 compute-0 systemd-logind[745]: Session 43 logged out. Waiting for processes to exit.
Dec 13 07:20:58 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 13 07:20:58 compute-0 systemd[1]: session-43.scope: Consumed 3.537s CPU time.
Dec 13 07:20:58 compute-0 systemd-logind[745]: Removed session 43.
Dec 13 07:20:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:20:59 compute-0 ceph-mon[74928]: pgmap v326: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:01 compute-0 ceph-mon[74928]: pgmap v327: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:03 compute-0 sshd-session[124112]: Accepted publickey for zuul from 192.168.122.30 port 35648 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:21:03 compute-0 systemd-logind[745]: New session 44 of user zuul.
Dec 13 07:21:03 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 13 07:21:03 compute-0 sshd-session[124112]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:21:03 compute-0 ceph-mon[74928]: pgmap v328: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:04 compute-0 python3.9[124265]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:21:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:04 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 07:21:04 compute-0 sudo[124421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwegttbthlvqtwbkcxgfihftimmvwfhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610464.3762891-32-118262087417265/AnsiballZ_systemd.py'
Dec 13 07:21:04 compute-0 sudo[124421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:05 compute-0 python3.9[124423]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 07:21:05 compute-0 sudo[124421]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:05 compute-0 ceph-mon[74928]: pgmap v329: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:05 compute-0 sudo[124575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwmzbukowcpeixteoocqygbhynbeplel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610465.2313457-40-130645781082021/AnsiballZ_systemd.py'
Dec 13 07:21:05 compute-0 sudo[124575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:05 compute-0 python3.9[124577]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:21:05 compute-0 sudo[124575]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:06 compute-0 sudo[124728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bytmafvbzgudurqwmqyhgmnojebhalks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610465.8434896-49-239398390853288/AnsiballZ_command.py'
Dec 13 07:21:06 compute-0 sudo[124728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:06 compute-0 python3.9[124730]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:21:06 compute-0 sudo[124728]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:06 compute-0 sudo[124881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugbngzvdnidziectbmoaensavqpmwdpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610466.4823492-57-141606034209476/AnsiballZ_stat.py'
Dec 13 07:21:06 compute-0 sudo[124881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:06 compute-0 python3.9[124883]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:21:06 compute-0 sudo[124881]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:07 compute-0 ceph-mon[74928]: pgmap v330: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:07 compute-0 sudo[125033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gubzilbwlfcphtcehfplkhbzqyjmosah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610467.2472563-66-76812479622307/AnsiballZ_file.py'
Dec 13 07:21:07 compute-0 sudo[125033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:07 compute-0 python3.9[125035]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:07 compute-0 sudo[125033]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:08 compute-0 sshd-session[124115]: Connection closed by 192.168.122.30 port 35648
Dec 13 07:21:08 compute-0 sshd-session[124112]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:21:08 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 13 07:21:08 compute-0 systemd[1]: session-44.scope: Consumed 2.828s CPU time.
Dec 13 07:21:08 compute-0 systemd-logind[745]: Session 44 logged out. Waiting for processes to exit.
Dec 13 07:21:08 compute-0 systemd-logind[745]: Removed session 44.
Dec 13 07:21:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:21:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:21:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:21:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:21:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:21:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:21:09 compute-0 ceph-mon[74928]: pgmap v331: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:11 compute-0 ceph-mon[74928]: pgmap v332: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:13 compute-0 sshd-session[125060]: Accepted publickey for zuul from 192.168.122.30 port 54014 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:21:13 compute-0 systemd-logind[745]: New session 45 of user zuul.
Dec 13 07:21:13 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 13 07:21:13 compute-0 sshd-session[125060]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:21:13 compute-0 ceph-mon[74928]: pgmap v333: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:14 compute-0 python3.9[125213]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:21:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:14 compute-0 sudo[125367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukayilmnnvmwgkpitutudwxoeipvttjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610474.5147069-34-167762659595137/AnsiballZ_setup.py'
Dec 13 07:21:14 compute-0 sudo[125367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:14 compute-0 python3.9[125369]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:21:15 compute-0 sudo[125367]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:15 compute-0 ceph-mon[74928]: pgmap v334: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:15 compute-0 sudo[125451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arkczlwnppximoxfdyssrkqxqjjjfest ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610474.5147069-34-167762659595137/AnsiballZ_dnf.py'
Dec 13 07:21:15 compute-0 sudo[125451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:15 compute-0 python3.9[125453]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 07:21:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:16 compute-0 sudo[125451]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:17 compute-0 python3.9[125604]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:21:17 compute-0 ceph-mon[74928]: pgmap v335: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:18 compute-0 python3.9[125755]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 07:21:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:18 compute-0 python3.9[125905]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:21:19 compute-0 python3.9[126055]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:21:19 compute-0 ceph-mon[74928]: pgmap v336: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:19 compute-0 sshd-session[125063]: Connection closed by 192.168.122.30 port 54014
Dec 13 07:21:19 compute-0 sshd-session[125060]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:21:19 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 13 07:21:19 compute-0 systemd[1]: session-45.scope: Consumed 4.216s CPU time.
Dec 13 07:21:19 compute-0 systemd-logind[745]: Session 45 logged out. Waiting for processes to exit.
Dec 13 07:21:19 compute-0 systemd-logind[745]: Removed session 45.
Dec 13 07:21:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:21 compute-0 ceph-mon[74928]: pgmap v337: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:23 compute-0 ceph-mon[74928]: pgmap v338: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:24 compute-0 sshd-session[126080]: Accepted publickey for zuul from 192.168.122.30 port 48142 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:21:24 compute-0 systemd-logind[745]: New session 46 of user zuul.
Dec 13 07:21:24 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 13 07:21:24 compute-0 sshd-session[126080]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:21:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:25 compute-0 python3.9[126233]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:21:25 compute-0 sudo[126238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:21:25 compute-0 sudo[126238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:25 compute-0 sudo[126238]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:25 compute-0 sudo[126263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:21:25 compute-0 sudo[126263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:25 compute-0 ceph-mon[74928]: pgmap v339: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:25 compute-0 sudo[126263]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:21:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:21:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:21:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:21:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:21:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:21:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:21:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:21:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:21:25 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:21:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:21:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:21:25 compute-0 sudo[126342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:21:25 compute-0 sudo[126342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:25 compute-0 sudo[126342]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:25 compute-0 sudo[126367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:21:25 compute-0 sudo[126367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:25 compute-0 podman[126401]: 2025-12-13 07:21:25.836033277 +0000 UTC m=+0.027889418 container create b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 07:21:25 compute-0 systemd[1]: Started libpod-conmon-b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712.scope.
Dec 13 07:21:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:21:25 compute-0 podman[126401]: 2025-12-13 07:21:25.88463247 +0000 UTC m=+0.076488611 container init b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:21:25 compute-0 podman[126401]: 2025-12-13 07:21:25.889853771 +0000 UTC m=+0.081709912 container start b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 07:21:25 compute-0 podman[126401]: 2025-12-13 07:21:25.891197656 +0000 UTC m=+0.083053797 container attach b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:21:25 compute-0 vigilant_tesla[126416]: 167 167
Dec 13 07:21:25 compute-0 systemd[1]: libpod-b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712.scope: Deactivated successfully.
Dec 13 07:21:25 compute-0 podman[126401]: 2025-12-13 07:21:25.894074963 +0000 UTC m=+0.085931105 container died b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0694aef238ad530bcf796fd1600cf539a8042cac15455a179fb4cda2060505d-merged.mount: Deactivated successfully.
Dec 13 07:21:25 compute-0 podman[126401]: 2025-12-13 07:21:25.915096895 +0000 UTC m=+0.106953036 container remove b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:21:25 compute-0 podman[126401]: 2025-12-13 07:21:25.824286073 +0000 UTC m=+0.016142224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:21:25 compute-0 systemd[1]: libpod-conmon-b01596e13d94ca6005a426b7f7b67576bc81993b4519e970ee667b96e1d79712.scope: Deactivated successfully.
Dec 13 07:21:26 compute-0 podman[126489]: 2025-12-13 07:21:26.032949873 +0000 UTC m=+0.030269299 container create 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:21:26 compute-0 systemd[1]: Started libpod-conmon-9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685.scope.
Dec 13 07:21:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:26 compute-0 podman[126489]: 2025-12-13 07:21:26.096597634 +0000 UTC m=+0.093917071 container init 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 07:21:26 compute-0 podman[126489]: 2025-12-13 07:21:26.101933239 +0000 UTC m=+0.099252666 container start 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:21:26 compute-0 podman[126489]: 2025-12-13 07:21:26.103321528 +0000 UTC m=+0.100640975 container attach 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:21:26 compute-0 podman[126489]: 2025-12-13 07:21:26.019722147 +0000 UTC m=+0.017041584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:21:26 compute-0 sudo[126581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdmlqalghnvwiuyvpixumcsduyksgsdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610485.8912683-50-280672747746379/AnsiballZ_file.py'
Dec 13 07:21:26 compute-0 sudo[126581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:26 compute-0 python3.9[126583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:26 compute-0 sudo[126581]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:21:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:21:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:21:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:21:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:21:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:21:26 compute-0 sharp_ptolemy[126503]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:21:26 compute-0 sharp_ptolemy[126503]: --> All data devices are unavailable
Dec 13 07:21:26 compute-0 systemd[1]: libpod-9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685.scope: Deactivated successfully.
Dec 13 07:21:26 compute-0 podman[126489]: 2025-12-13 07:21:26.482160225 +0000 UTC m=+0.479479662 container died 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a517465f8f451bfcf4f8f9e98d8181ee46c5e2e579b07b0778a4ee623902ba93-merged.mount: Deactivated successfully.
Dec 13 07:21:26 compute-0 podman[126489]: 2025-12-13 07:21:26.507299904 +0000 UTC m=+0.504619331 container remove 9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:21:26 compute-0 systemd[1]: libpod-conmon-9172b6a4b744ec68e78384ad61cf6ec0dcabc4926b8f8053da97a2284195b685.scope: Deactivated successfully.
Dec 13 07:21:26 compute-0 sudo[126367]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:26 compute-0 sudo[126671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:21:26 compute-0 sudo[126671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:26 compute-0 sudo[126671]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:26 compute-0 sudo[126720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:21:26 compute-0 sudo[126720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:26 compute-0 sudo[126810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhmfkxngducrmovrlgtofmgatelrkebb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610486.5194244-50-30005546650790/AnsiballZ_file.py'
Dec 13 07:21:26 compute-0 sudo[126810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:26 compute-0 podman[126823]: 2025-12-13 07:21:26.850385567 +0000 UTC m=+0.027884680 container create 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 07:21:26 compute-0 python3.9[126812]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:26 compute-0 systemd[1]: Started libpod-conmon-833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1.scope.
Dec 13 07:21:26 compute-0 sudo[126810]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:21:26 compute-0 podman[126823]: 2025-12-13 07:21:26.900889248 +0000 UTC m=+0.078388370 container init 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:21:26 compute-0 podman[126823]: 2025-12-13 07:21:26.905800898 +0000 UTC m=+0.083300010 container start 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:21:26 compute-0 podman[126823]: 2025-12-13 07:21:26.907383562 +0000 UTC m=+0.084882694 container attach 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:21:26 compute-0 relaxed_montalcini[126836]: 167 167
Dec 13 07:21:26 compute-0 systemd[1]: libpod-833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1.scope: Deactivated successfully.
Dec 13 07:21:26 compute-0 podman[126823]: 2025-12-13 07:21:26.910210794 +0000 UTC m=+0.087709907 container died 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6373f59514ec3f52ac0d79d8699dfda1c73240468f8ac50378e68cc1684b61f-merged.mount: Deactivated successfully.
Dec 13 07:21:26 compute-0 podman[126823]: 2025-12-13 07:21:26.93334176 +0000 UTC m=+0.110840872 container remove 833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:21:26 compute-0 podman[126823]: 2025-12-13 07:21:26.83935521 +0000 UTC m=+0.016854341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:21:26 compute-0 systemd[1]: libpod-conmon-833ad7c1a28b24191b1c69188123a9dc770ed9c08e47448344a2fc18f2e7d3d1.scope: Deactivated successfully.
Dec 13 07:21:27 compute-0 podman[126890]: 2025-12-13 07:21:27.057761998 +0000 UTC m=+0.031668509 container create 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:21:27 compute-0 systemd[1]: Started libpod-conmon-31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead.scope.
Dec 13 07:21:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:27 compute-0 podman[126890]: 2025-12-13 07:21:27.118665221 +0000 UTC m=+0.092571723 container init 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:21:27 compute-0 podman[126890]: 2025-12-13 07:21:27.124075408 +0000 UTC m=+0.097981908 container start 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:21:27 compute-0 podman[126890]: 2025-12-13 07:21:27.125195913 +0000 UTC m=+0.099102404 container attach 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:21:27 compute-0 podman[126890]: 2025-12-13 07:21:27.045025154 +0000 UTC m=+0.018931675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]: {
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:     "0": [
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:         {
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "devices": [
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "/dev/loop3"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             ],
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_name": "ceph_lv0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_size": "21470642176",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "name": "ceph_lv0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "tags": {
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cluster_name": "ceph",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.crush_device_class": "",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.encrypted": "0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.objectstore": "bluestore",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osd_id": "0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.type": "block",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.vdo": "0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.with_tpm": "0"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             },
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "type": "block",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "vg_name": "ceph_vg0"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:         }
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:     ],
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:     "1": [
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:         {
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "devices": [
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "/dev/loop4"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             ],
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_name": "ceph_lv1",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_size": "21470642176",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "name": "ceph_lv1",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "tags": {
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cluster_name": "ceph",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.crush_device_class": "",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.encrypted": "0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.objectstore": "bluestore",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osd_id": "1",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.type": "block",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.vdo": "0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.with_tpm": "0"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             },
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "type": "block",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "vg_name": "ceph_vg1"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:         }
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:     ],
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:     "2": [
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:         {
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "devices": [
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "/dev/loop5"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             ],
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_name": "ceph_lv2",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_size": "21470642176",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "name": "ceph_lv2",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "tags": {
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.cluster_name": "ceph",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.crush_device_class": "",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.encrypted": "0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.objectstore": "bluestore",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osd_id": "2",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.type": "block",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.vdo": "0",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:                 "ceph.with_tpm": "0"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             },
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "type": "block",
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:             "vg_name": "ceph_vg2"
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:         }
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]:     ]
Dec 13 07:21:27 compute-0 hardcore_lumiere[126947]: }
Dec 13 07:21:27 compute-0 systemd[1]: libpod-31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead.scope: Deactivated successfully.
Dec 13 07:21:27 compute-0 podman[126890]: 2025-12-13 07:21:27.370914012 +0000 UTC m=+0.344820513 container died 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:21:27 compute-0 sudo[127029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzgzpdjdanobtawqevmmgjmwmbggndaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610487.0310538-65-85864415652591/AnsiballZ_stat.py'
Dec 13 07:21:27 compute-0 sudo[127029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5e686ea281672ceb91a078844a7083cd2bdcad34bdcd16914994213179a1fe9-merged.mount: Deactivated successfully.
Dec 13 07:21:27 compute-0 podman[126890]: 2025-12-13 07:21:27.393606073 +0000 UTC m=+0.367512574 container remove 31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lumiere, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 13 07:21:27 compute-0 systemd[1]: libpod-conmon-31e92762343d4fbcf7e345806620e366df14950259f1b1cae269050f939c0ead.scope: Deactivated successfully.
Dec 13 07:21:27 compute-0 sudo[126720]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:27 compute-0 sudo[127041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:21:27 compute-0 sudo[127041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:27 compute-0 sudo[127041]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:27 compute-0 sudo[127066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:21:27 compute-0 sudo[127066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:27 compute-0 ceph-mon[74928]: pgmap v340: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:27 compute-0 python3.9[127039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:27 compute-0 sudo[127029]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:27 compute-0 podman[127148]: 2025-12-13 07:21:27.742866458 +0000 UTC m=+0.030084723 container create 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:21:27 compute-0 systemd[1]: Started libpod-conmon-5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946.scope.
Dec 13 07:21:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:21:27 compute-0 podman[127148]: 2025-12-13 07:21:27.794660113 +0000 UTC m=+0.081878378 container init 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:21:27 compute-0 podman[127148]: 2025-12-13 07:21:27.799920177 +0000 UTC m=+0.087138432 container start 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:21:27 compute-0 podman[127148]: 2025-12-13 07:21:27.801197508 +0000 UTC m=+0.088415763 container attach 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:21:27 compute-0 sleepy_pascal[127161]: 167 167
Dec 13 07:21:27 compute-0 systemd[1]: libpod-5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946.scope: Deactivated successfully.
Dec 13 07:21:27 compute-0 podman[127148]: 2025-12-13 07:21:27.80455638 +0000 UTC m=+0.091774655 container died 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:21:27 compute-0 podman[127148]: 2025-12-13 07:21:27.821251031 +0000 UTC m=+0.108469285 container remove 5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pascal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:21:27 compute-0 podman[127148]: 2025-12-13 07:21:27.730823949 +0000 UTC m=+0.018042203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:21:27 compute-0 systemd[1]: libpod-conmon-5e22ec5deda52c7fe6fa044f11e4e10d68e93ea7c462d79300c65b7d43b90946.scope: Deactivated successfully.
Dec 13 07:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f242c5b8f303ae8d7ab0e8a1f726cf43054a56664d6192b8dc14ceab8221d85-merged.mount: Deactivated successfully.
Dec 13 07:21:27 compute-0 sudo[127258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gycyrdbqndtlliwtvbfcivvrmbamqces ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610487.0310538-65-85864415652591/AnsiballZ_copy.py'
Dec 13 07:21:27 compute-0 sudo[127258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:27 compute-0 podman[127252]: 2025-12-13 07:21:27.943421852 +0000 UTC m=+0.029202556 container create 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:21:27 compute-0 systemd[1]: Started libpod-conmon-8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca.scope.
Dec 13 07:21:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:21:28 compute-0 podman[127252]: 2025-12-13 07:21:28.009574689 +0000 UTC m=+0.095355393 container init 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:21:28 compute-0 podman[127252]: 2025-12-13 07:21:28.01433275 +0000 UTC m=+0.100113454 container start 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:21:28 compute-0 podman[127252]: 2025-12-13 07:21:28.015563944 +0000 UTC m=+0.101344648 container attach 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 13 07:21:28 compute-0 podman[127252]: 2025-12-13 07:21:27.931609165 +0000 UTC m=+0.017389879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:21:28 compute-0 python3.9[127266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610487.0310538-65-85864415652591/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=93ffe3a97552f1a61bab50e724ad94c02d337e62 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:28 compute-0 sudo[127258]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:28 compute-0 sudo[127479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moyavjwwfydkxseobeummtbmvycgbmcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610488.207571-65-80476117840165/AnsiballZ_stat.py'
Dec 13 07:21:28 compute-0 sudo[127479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:28 compute-0 python3.9[127484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:28 compute-0 lvm[127499]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:21:28 compute-0 lvm[127499]: VG ceph_vg0 finished
Dec 13 07:21:28 compute-0 lvm[127502]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:21:28 compute-0 lvm[127502]: VG ceph_vg1 finished
Dec 13 07:21:28 compute-0 sudo[127479]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:28 compute-0 lvm[127505]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:21:28 compute-0 lvm[127505]: VG ceph_vg2 finished
Dec 13 07:21:28 compute-0 lvm[127506]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:21:28 compute-0 romantic_murdock[127272]: {}
Dec 13 07:21:28 compute-0 lvm[127506]: VG ceph_vg1 finished
Dec 13 07:21:28 compute-0 systemd[1]: libpod-8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca.scope: Deactivated successfully.
Dec 13 07:21:28 compute-0 podman[127252]: 2025-12-13 07:21:28.657720269 +0000 UTC m=+0.743500973 container died 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c92c191c52649bc524365de8a435b5b176d3b4d679e5a41b91b81a945048ab5-merged.mount: Deactivated successfully.
Dec 13 07:21:28 compute-0 podman[127252]: 2025-12-13 07:21:28.680402722 +0000 UTC m=+0.766183427 container remove 8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:21:28 compute-0 systemd[1]: libpod-conmon-8c6a3b4f5421b33956b0fb2100f80aca6e3d5a2cfdb82117888e28bcd805c2ca.scope: Deactivated successfully.
Dec 13 07:21:28 compute-0 sudo[127066]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:21:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:21:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:21:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:21:28 compute-0 sudo[127564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:21:28 compute-0 sudo[127564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:21:28 compute-0 sudo[127564]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:28 compute-0 sudo[127662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghbconnwpehvzdbwbqxeocevqmrkfusv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610488.207571-65-80476117840165/AnsiballZ_copy.py'
Dec 13 07:21:28 compute-0 sudo[127662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:29 compute-0 python3.9[127664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610488.207571-65-80476117840165/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=aa3fdf38ba0502d304f7711b68e366c39f369e39 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:29 compute-0 sudo[127662]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:29 compute-0 sudo[127814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymhxuzwoguqukpwjttbccgqxzjdeblcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610489.110907-65-135232022985091/AnsiballZ_stat.py'
Dec 13 07:21:29 compute-0 sudo[127814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:29 compute-0 python3.9[127816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:29 compute-0 sudo[127814]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:29 compute-0 ceph-mon[74928]: pgmap v341: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:21:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:21:29 compute-0 sudo[127937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csrkdajxkggvhqeesfecvymqjaoebwgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610489.110907-65-135232022985091/AnsiballZ_copy.py'
Dec 13 07:21:29 compute-0 sudo[127937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:29 compute-0 python3.9[127939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610489.110907-65-135232022985091/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=53702116d63484fa29e275004baf9ef3d140f885 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:29 compute-0 sudo[127937]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:30 compute-0 sudo[128089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbazsmlshvlnteztnokrfqlhgezbmugy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610490.0038679-109-217930023645824/AnsiballZ_file.py'
Dec 13 07:21:30 compute-0 sudo[128089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:30 compute-0 python3.9[128091]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:30 compute-0 sudo[128089]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:30 compute-0 sudo[128241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezmelnoqlgiamppjdkwljosaifqkwgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610490.46742-109-264917702848061/AnsiballZ_file.py'
Dec 13 07:21:30 compute-0 sudo[128241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:30 compute-0 python3.9[128243]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:30 compute-0 sudo[128241]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:31 compute-0 sudo[128393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybmaznmgaynldodknfiosorfdaxjncom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610490.957922-124-123964135607505/AnsiballZ_stat.py'
Dec 13 07:21:31 compute-0 sudo[128393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:31 compute-0 python3.9[128395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:31 compute-0 sudo[128393]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:31 compute-0 ceph-mon[74928]: pgmap v342: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:31 compute-0 sudo[128516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucoqijeztjxeehpetiqhezlqzjirwpnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610490.957922-124-123964135607505/AnsiballZ_copy.py'
Dec 13 07:21:31 compute-0 sudo[128516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:31 compute-0 python3.9[128518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610490.957922-124-123964135607505/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9b324efd75b06f149665358bc5a26a4d083e28e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:31 compute-0 sudo[128516]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:31 compute-0 sudo[128668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzahftgygqiabzpmnisajrchnnphwhlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610491.7980857-124-196559809791329/AnsiballZ_stat.py'
Dec 13 07:21:31 compute-0 sudo[128668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:32 compute-0 python3.9[128670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:32 compute-0 sudo[128668]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:32 compute-0 sudo[128791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmnrdydzoovhcoyyliijsvxgrzsxmhyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610491.7980857-124-196559809791329/AnsiballZ_copy.py'
Dec 13 07:21:32 compute-0 sudo[128791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:32 compute-0 python3.9[128793]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610491.7980857-124-196559809791329/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f836755fbf73bec0ca76e493d55feda43385126d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:32 compute-0 sudo[128791]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:32 compute-0 sudo[128943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wejdrmxuvlaplwvkfhdzgczpzosiusqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610492.6338096-124-32039470112960/AnsiballZ_stat.py'
Dec 13 07:21:32 compute-0 sudo[128943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:33 compute-0 python3.9[128945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:33 compute-0 sudo[128943]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:33 compute-0 sudo[129066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdulfnqspaippaigxrlahzbawlynpcst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610492.6338096-124-32039470112960/AnsiballZ_copy.py'
Dec 13 07:21:33 compute-0 sudo[129066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:33 compute-0 python3.9[129068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610492.6338096-124-32039470112960/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=45e7071bcb8a7db3af7217195287d138c3b8e819 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:33 compute-0 sudo[129066]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:33 compute-0 ceph-mon[74928]: pgmap v343: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:33 compute-0 sudo[129218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxxtwuahjocjpqgqvdvqggvqjkvtasy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610493.6993952-168-199429876570556/AnsiballZ_file.py'
Dec 13 07:21:33 compute-0 sudo[129218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:34 compute-0 python3.9[129220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:34 compute-0 sudo[129218]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:34 compute-0 sudo[129370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odfutwwquuhqvyuafmouldpkswcevicc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610494.1482797-168-197240476519055/AnsiballZ_file.py'
Dec 13 07:21:34 compute-0 sudo[129370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:34 compute-0 python3.9[129372]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:34 compute-0 sudo[129370]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:34 compute-0 sudo[129522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrqtpmtlybduwqsvnzqrxsmnewzujuhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610494.6801355-183-255085279252703/AnsiballZ_stat.py'
Dec 13 07:21:34 compute-0 sudo[129522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:35 compute-0 python3.9[129524]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:35 compute-0 sudo[129522]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:35 compute-0 sudo[129645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlsmjufqtxtwmkibqqnshvtkifjjebgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610494.6801355-183-255085279252703/AnsiballZ_copy.py'
Dec 13 07:21:35 compute-0 sudo[129645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:35 compute-0 python3.9[129647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610494.6801355-183-255085279252703/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=eed392971785171bf75d6f89c2fe22f844cb5eb7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:35 compute-0 sudo[129645]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:35 compute-0 ceph-mon[74928]: pgmap v344: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:35 compute-0 sudo[129797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslwsjgziirurlivcumduvnohsuhtgbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610495.562342-183-207839471850124/AnsiballZ_stat.py'
Dec 13 07:21:35 compute-0 sudo[129797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:35 compute-0 python3.9[129799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:35 compute-0 sudo[129797]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:36 compute-0 sudo[129920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbylfifaxiwsbglzclkymeiwyerloeyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610495.562342-183-207839471850124/AnsiballZ_copy.py'
Dec 13 07:21:36 compute-0 sudo[129920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:36 compute-0 python3.9[129922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610495.562342-183-207839471850124/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f836755fbf73bec0ca76e493d55feda43385126d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:36 compute-0 sudo[129920]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:36 compute-0 sudo[130072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkmcxsvcberfixyraxhulppaxglydjhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610496.3566833-183-43751927436608/AnsiballZ_stat.py'
Dec 13 07:21:36 compute-0 sudo[130072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:36 compute-0 python3.9[130074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:36 compute-0 sudo[130072]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:36 compute-0 sudo[130195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebhfsbraffncoiozodhxifmerhufapm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610496.3566833-183-43751927436608/AnsiballZ_copy.py'
Dec 13 07:21:36 compute-0 sudo[130195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:37 compute-0 python3.9[130197]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610496.3566833-183-43751927436608/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6e664a5b446396149ea98486bb38e13e356dec0e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:37 compute-0 sudo[130195]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:37 compute-0 ceph-mon[74928]: pgmap v345: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:37 compute-0 sudo[130347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxegfdgquaxcflcxgjekplyeuhsfemzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610497.7040694-243-235759288177378/AnsiballZ_file.py'
Dec 13 07:21:37 compute-0 sudo[130347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:38 compute-0 python3.9[130349]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:38 compute-0 sudo[130347]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:21:38
Dec 13 07:21:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:21:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:21:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.control', 'backups', 'default.rgw.log', '.mgr', '.rgw.root', 'volumes']
Dec 13 07:21:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:21:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:38 compute-0 sudo[130499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yofjgjtmbdzvhfvrihhakuxgsjfosjyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610498.1701996-251-199209639023605/AnsiballZ_stat.py'
Dec 13 07:21:38 compute-0 sudo[130499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:38 compute-0 python3.9[130501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:38 compute-0 sudo[130499]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:38 compute-0 sudo[130622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whrxymksqnhpqvaenrsujpianxtimuaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610498.1701996-251-199209639023605/AnsiballZ_copy.py'
Dec 13 07:21:38 compute-0 sudo[130622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:38 compute-0 python3.9[130624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610498.1701996-251-199209639023605/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:38 compute-0 sudo[130622]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:21:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:21:39 compute-0 sudo[130774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjikjmfnbtcfrxoeadyntmgdhpnefder ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610499.0818942-267-166330864610908/AnsiballZ_file.py'
Dec 13 07:21:39 compute-0 sudo[130774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:39 compute-0 python3.9[130776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:39 compute-0 sudo[130774]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:39 compute-0 ceph-mon[74928]: pgmap v346: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:39 compute-0 sudo[130926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boulefaemvsixvnqeuihdwlbkoakqrsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610499.5764947-275-145744352811170/AnsiballZ_stat.py'
Dec 13 07:21:39 compute-0 sudo[130926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:39 compute-0 python3.9[130928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:39 compute-0 sudo[130926]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:40 compute-0 sudo[131049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsjtaeychvmnknkzzshwxyyryehtjzxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610499.5764947-275-145744352811170/AnsiballZ_copy.py'
Dec 13 07:21:40 compute-0 sudo[131049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:40 compute-0 python3.9[131051]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610499.5764947-275-145744352811170/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:40 compute-0 sudo[131049]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:40 compute-0 sudo[131201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smknpcxwskjygydqhhxbbxywrycrsvbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610500.471642-291-92781835644748/AnsiballZ_file.py'
Dec 13 07:21:40 compute-0 sudo[131201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:40 compute-0 python3.9[131203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:40 compute-0 sudo[131201]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:41 compute-0 sudo[131353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyuzmtnackeubpddcoeizzzhwhnrzrua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610500.9542894-299-26184027675444/AnsiballZ_stat.py'
Dec 13 07:21:41 compute-0 sudo[131353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:41 compute-0 python3.9[131355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:41 compute-0 sudo[131353]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:41 compute-0 ceph-mon[74928]: pgmap v347: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:41 compute-0 sudo[131476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfecmnaqnbmrrayyldeaujtrufnavdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610500.9542894-299-26184027675444/AnsiballZ_copy.py'
Dec 13 07:21:41 compute-0 sudo[131476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:41 compute-0 python3.9[131478]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610500.9542894-299-26184027675444/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:41 compute-0 sudo[131476]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:42 compute-0 sudo[131628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usiqsyhhfnjxxxfeembvauwtouulljnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610501.999686-315-191625323938229/AnsiballZ_file.py'
Dec 13 07:21:42 compute-0 sudo[131628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:42 compute-0 python3.9[131630]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:42 compute-0 sudo[131628]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:42 compute-0 sudo[131780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muisofpxkmrpmhfibcafbiscmakrpaxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610502.4866176-323-174055833899131/AnsiballZ_stat.py'
Dec 13 07:21:42 compute-0 sudo[131780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:42 compute-0 python3.9[131782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:42 compute-0 sudo[131780]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:43 compute-0 sudo[131903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntwrnpfvrmfrllesnesraqaaayfxewmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610502.4866176-323-174055833899131/AnsiballZ_copy.py'
Dec 13 07:21:43 compute-0 sudo[131903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:43 compute-0 python3.9[131905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610502.4866176-323-174055833899131/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:43 compute-0 sudo[131903]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:43 compute-0 ceph-mon[74928]: pgmap v348: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:43 compute-0 sudo[132055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvqxlugetprrlzwjyvnlfzbmmodosqij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610503.4068198-339-123872394839642/AnsiballZ_file.py'
Dec 13 07:21:43 compute-0 sudo[132055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:43 compute-0 python3.9[132057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:43 compute-0 sudo[132055]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:44 compute-0 sudo[132207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzqlyhrxqopietkaoklwqdcpsjhwjjmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610503.8916225-347-121092192754256/AnsiballZ_stat.py'
Dec 13 07:21:44 compute-0 sudo[132207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:44 compute-0 python3.9[132209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:44 compute-0 sudo[132207]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:44 compute-0 sudo[132330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcievbduxrvfjjwudlklomddhxwjrru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610503.8916225-347-121092192754256/AnsiballZ_copy.py'
Dec 13 07:21:44 compute-0 sudo[132330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:44 compute-0 python3.9[132332]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610503.8916225-347-121092192754256/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:44 compute-0 sudo[132330]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:44 compute-0 sudo[132482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojfqccnohfqdibajltdllbmuxojdrmlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610504.814504-363-262997966218346/AnsiballZ_file.py'
Dec 13 07:21:44 compute-0 sudo[132482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:45 compute-0 python3.9[132484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:21:45 compute-0 sudo[132482]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:45 compute-0 sudo[132634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvbfjwlixfwxdasnyoebjzpjczmumpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610505.2772593-371-218379423811047/AnsiballZ_stat.py'
Dec 13 07:21:45 compute-0 sudo[132634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:45 compute-0 ceph-mon[74928]: pgmap v349: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:45 compute-0 python3.9[132636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:45 compute-0 sudo[132634]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:45 compute-0 sudo[132757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyojhjybsszmdjmdxpnjupaadauhugsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610505.2772593-371-218379423811047/AnsiballZ_copy.py'
Dec 13 07:21:45 compute-0 sudo[132757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:45 compute-0 python3.9[132759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610505.2772593-371-218379423811047/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac04d306f192c0875048c78c53711957498c3ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:46 compute-0 sudo[132757]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:46 compute-0 sshd-session[126083]: Connection closed by 192.168.122.30 port 48142
Dec 13 07:21:46 compute-0 sshd-session[126080]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:21:46 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 13 07:21:46 compute-0 systemd[1]: session-46.scope: Consumed 15.980s CPU time.
Dec 13 07:21:46 compute-0 systemd-logind[745]: Session 46 logged out. Waiting for processes to exit.
Dec 13 07:21:46 compute-0 systemd-logind[745]: Removed session 46.
Dec 13 07:21:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:47 compute-0 ceph-mon[74928]: pgmap v350: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:21:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:21:49 compute-0 ceph-mon[74928]: pgmap v351: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:51 compute-0 ceph-mon[74928]: pgmap v352: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:51 compute-0 sshd-session[132784]: Accepted publickey for zuul from 192.168.122.30 port 49484 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:21:51 compute-0 systemd-logind[745]: New session 47 of user zuul.
Dec 13 07:21:51 compute-0 systemd[1]: Started Session 47 of User zuul.
Dec 13 07:21:51 compute-0 sshd-session[132784]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:21:52 compute-0 sudo[132937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bolexpfvdyqtbmdhjxhgoxcokfumscwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610511.798133-22-65107324922035/AnsiballZ_file.py'
Dec 13 07:21:52 compute-0 sudo[132937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:52 compute-0 python3.9[132939]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:52 compute-0 sudo[132937]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:52 compute-0 sudo[133089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qopupivgajiiluxhaahlbhtlhpkpjbxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610512.47459-34-206133187231405/AnsiballZ_stat.py'
Dec 13 07:21:52 compute-0 sudo[133089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:52 compute-0 python3.9[133091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:52 compute-0 sudo[133089]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:53 compute-0 sudo[133212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxgtjpfxvyixdyrbweuqwabvdtlnymwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610512.47459-34-206133187231405/AnsiballZ_copy.py'
Dec 13 07:21:53 compute-0 sudo[133212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:53 compute-0 python3.9[133214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610512.47459-34-206133187231405/.source.conf _original_basename=ceph.conf follow=False checksum=f9f4c7f65fdb2c19267612cdcf348da04cb4206e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:53 compute-0 sudo[133212]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:53 compute-0 ceph-mon[74928]: pgmap v353: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:53 compute-0 sudo[133364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhpimydygmpvirrvvrfkoebjpefbevpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610513.5907702-34-275441815188965/AnsiballZ_stat.py'
Dec 13 07:21:53 compute-0 sudo[133364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:53 compute-0 python3.9[133366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:21:53 compute-0 sudo[133364]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:54 compute-0 sudo[133487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvabhweistkbyzuegfgzdvegdmzcyllc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610513.5907702-34-275441815188965/AnsiballZ_copy.py'
Dec 13 07:21:54 compute-0 sudo[133487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:21:54 compute-0 python3.9[133489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610513.5907702-34-275441815188965/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=89bb88aee4825eacb5f29faabebd795dc909bcd4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:21:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:54 compute-0 sudo[133487]: pam_unix(sudo:session): session closed for user root
Dec 13 07:21:54 compute-0 sshd-session[132787]: Connection closed by 192.168.122.30 port 49484
Dec 13 07:21:54 compute-0 sshd-session[132784]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:21:54 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec 13 07:21:54 compute-0 systemd[1]: session-47.scope: Consumed 1.914s CPU time.
Dec 13 07:21:54 compute-0 systemd-logind[745]: Session 47 logged out. Waiting for processes to exit.
Dec 13 07:21:54 compute-0 systemd-logind[745]: Removed session 47.
Dec 13 07:21:55 compute-0 ceph-mon[74928]: pgmap v354: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:57 compute-0 ceph-mon[74928]: pgmap v355: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:21:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:59 compute-0 ceph-mon[74928]: pgmap v356: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:21:59 compute-0 sshd-session[133514]: Accepted publickey for zuul from 192.168.122.30 port 55324 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:21:59 compute-0 systemd-logind[745]: New session 48 of user zuul.
Dec 13 07:21:59 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 13 07:21:59 compute-0 sshd-session[133514]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:22:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:00 compute-0 python3.9[133667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:22:01 compute-0 sudo[133821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svmlnulcrntqzzzduysoaugshrgdxchv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610520.8569374-34-250388065934329/AnsiballZ_file.py'
Dec 13 07:22:01 compute-0 sudo[133821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:01 compute-0 python3.9[133823]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:01 compute-0 sudo[133821]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:01 compute-0 sudo[133973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qabsdpgsbgzovdcstttadecejtjcnhvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610521.4113111-34-241215874740204/AnsiballZ_file.py'
Dec 13 07:22:01 compute-0 sudo[133973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:01 compute-0 ceph-mon[74928]: pgmap v357: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:01 compute-0 python3.9[133975]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:01 compute-0 sudo[133973]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:02 compute-0 python3.9[134125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:22:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:02 compute-0 sudo[134275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzcdlpdmbcjfcsxgzqinrpufbhmsotou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610522.4062018-57-271432687860634/AnsiballZ_seboolean.py'
Dec 13 07:22:02 compute-0 sudo[134275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:02 compute-0 python3.9[134277]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 13 07:22:03 compute-0 ceph-mon[74928]: pgmap v358: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:03 compute-0 sudo[134275]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:04 compute-0 sudo[134431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjzyjnclwxqkgflcydgcifckpbtcemg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610523.8232772-67-251300931507946/AnsiballZ_setup.py'
Dec 13 07:22:04 compute-0 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 13 07:22:04 compute-0 sudo[134431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:04 compute-0 python3.9[134433]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:22:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:04 compute-0 sudo[134431]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:04 compute-0 sudo[134515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfjiychlyzxzdwgddulxijofmournisa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610523.8232772-67-251300931507946/AnsiballZ_dnf.py'
Dec 13 07:22:04 compute-0 sudo[134515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:05 compute-0 python3.9[134517]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:22:05 compute-0 ceph-mon[74928]: pgmap v359: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:05 compute-0 sudo[134515]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:06 compute-0 sudo[134668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwjmzsansdmhbzswyotdhjcboebtfcds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610526.1242044-79-192522583981141/AnsiballZ_systemd.py'
Dec 13 07:22:06 compute-0 sudo[134668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:06 compute-0 python3.9[134670]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:22:06 compute-0 sudo[134668]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:07 compute-0 sudo[134823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blwrqdflkycxnlpleztzwaxsvphuxnrv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610527.006577-87-177142331843841/AnsiballZ_edpm_nftables_snippet.py'
Dec 13 07:22:07 compute-0 sudo[134823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:07 compute-0 python3[134825]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 13 07:22:07 compute-0 sudo[134823]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:07 compute-0 ceph-mon[74928]: pgmap v360: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:07 compute-0 sudo[134975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duzmhdblmtxjswvfmcvcmrdntjconznv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610527.6769888-96-27239442404870/AnsiballZ_file.py'
Dec 13 07:22:07 compute-0 sudo[134975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:08 compute-0 python3.9[134977]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:08 compute-0 sudo[134975]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:08 compute-0 sudo[135127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kahkpndzmefmqzrqjphlghjuxxhnmfzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610528.1437356-104-91273050123365/AnsiballZ_stat.py'
Dec 13 07:22:08 compute-0 sudo[135127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:08 compute-0 python3.9[135129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:08 compute-0 sudo[135127]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:08 compute-0 sudo[135205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rebberzgcrbvkaeybodsoyrgtiljjbhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610528.1437356-104-91273050123365/AnsiballZ_file.py'
Dec 13 07:22:08 compute-0 sudo[135205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:08 compute-0 python3.9[135207]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:08 compute-0 sudo[135205]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:22:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:22:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:22:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:22:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:22:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:22:09 compute-0 sudo[135357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqnfvxcklltniotyfvoodulckyxkkwdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610529.0485232-116-46509993651012/AnsiballZ_stat.py'
Dec 13 07:22:09 compute-0 sudo[135357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:09 compute-0 python3.9[135359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:09 compute-0 sudo[135357]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:09 compute-0 sudo[135435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naoeylkdjoxfgedxmspwchyqkezgkmgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610529.0485232-116-46509993651012/AnsiballZ_file.py'
Dec 13 07:22:09 compute-0 sudo[135435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:09 compute-0 ceph-mon[74928]: pgmap v361: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:09 compute-0 python3.9[135437]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lzwq0wod recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:09 compute-0 sudo[135435]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:10 compute-0 sudo[135587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfmmrenvbqgomgxfzpssrlozuoepzlwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610529.8621612-128-208926640430264/AnsiballZ_stat.py'
Dec 13 07:22:10 compute-0 sudo[135587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:10 compute-0 python3.9[135589]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:10 compute-0 sudo[135587]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:10 compute-0 sudo[135665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajxpoyrxdtydikbteijjjcjxfiavymjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610529.8621612-128-208926640430264/AnsiballZ_file.py'
Dec 13 07:22:10 compute-0 sudo[135665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:10 compute-0 python3.9[135667]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:10 compute-0 sudo[135665]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:10 compute-0 sudo[135817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skdrhzbndqskmpjchqtycehtllldtpis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610530.685223-141-142610839967964/AnsiballZ_command.py'
Dec 13 07:22:10 compute-0 sudo[135817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:11 compute-0 python3.9[135819]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:11 compute-0 sudo[135817]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:11 compute-0 sudo[135970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsqkqoloxckmhyghocczwhcxbmivnfbi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610531.266638-149-210937089307076/AnsiballZ_edpm_nftables_from_files.py'
Dec 13 07:22:11 compute-0 sudo[135970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:11 compute-0 ceph-mon[74928]: pgmap v362: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:11 compute-0 python3[135972]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 07:22:11 compute-0 sudo[135970]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:12 compute-0 sudo[136122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiyoevqowfffoiipfvnoauutbznkhici ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610531.8916469-157-33462308288053/AnsiballZ_stat.py'
Dec 13 07:22:12 compute-0 sudo[136122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:12 compute-0 python3.9[136124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:12 compute-0 sudo[136122]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:12 compute-0 sudo[136247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iieetyawixjrnybxqfmqsztnsubfmghk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610531.8916469-157-33462308288053/AnsiballZ_copy.py'
Dec 13 07:22:12 compute-0 sudo[136247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:12 compute-0 python3.9[136249]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610531.8916469-157-33462308288053/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:12 compute-0 sudo[136247]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:13 compute-0 sudo[136399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wozkayusufcnjdbzbiyyuockawjcifnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610533.0020528-172-113769241810849/AnsiballZ_stat.py'
Dec 13 07:22:13 compute-0 sudo[136399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:13 compute-0 python3.9[136401]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:13 compute-0 sudo[136399]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:13 compute-0 ceph-mon[74928]: pgmap v363: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:13 compute-0 sudo[136524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwszhmyvpwukikksollpdcwnfomezpth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610533.0020528-172-113769241810849/AnsiballZ_copy.py'
Dec 13 07:22:13 compute-0 sudo[136524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:13 compute-0 python3.9[136526]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610533.0020528-172-113769241810849/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:13 compute-0 sudo[136524]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:14 compute-0 sudo[136676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmxmybtlcwbthefzkvqsqnkwqbakqump ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610533.933562-187-192852118733503/AnsiballZ_stat.py'
Dec 13 07:22:14 compute-0 sudo[136676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:14 compute-0 python3.9[136678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:14 compute-0 sudo[136676]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:14 compute-0 sudo[136801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvcrxycxbznkbsvpmgaujusatdshepio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610533.933562-187-192852118733503/AnsiballZ_copy.py'
Dec 13 07:22:14 compute-0 sudo[136801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:14 compute-0 python3.9[136803]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610533.933562-187-192852118733503/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:14 compute-0 sudo[136801]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:15 compute-0 sudo[136953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtwaapnhmqbgkjricomhhrahdcwrbxwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610534.8378847-202-266397596612345/AnsiballZ_stat.py'
Dec 13 07:22:15 compute-0 sudo[136953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:15 compute-0 python3.9[136955]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:15 compute-0 sudo[136953]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:15 compute-0 ceph-mon[74928]: pgmap v364: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:15 compute-0 sudo[137078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbiweyifygfdfyohfjfuztjunxxawsat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610534.8378847-202-266397596612345/AnsiballZ_copy.py'
Dec 13 07:22:15 compute-0 sudo[137078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:15 compute-0 python3.9[137080]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610534.8378847-202-266397596612345/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:15 compute-0 sudo[137078]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:16 compute-0 sudo[137230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhcnkkdycoaahhhxjjspkdulduyalllr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610535.9171937-217-212678752126689/AnsiballZ_stat.py'
Dec 13 07:22:16 compute-0 sudo[137230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:16 compute-0 python3.9[137232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:16 compute-0 sudo[137230]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:16 compute-0 sudo[137355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyezbnxwdcxbmjkbemzyabitjznmyaro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610535.9171937-217-212678752126689/AnsiballZ_copy.py'
Dec 13 07:22:16 compute-0 sudo[137355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:16 compute-0 python3.9[137357]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610535.9171937-217-212678752126689/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:16 compute-0 sudo[137355]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:17 compute-0 sudo[137507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqrtoaehxtuuaucttxlnbumohjfqukrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610536.8833914-232-119547735976289/AnsiballZ_file.py'
Dec 13 07:22:17 compute-0 sudo[137507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:17 compute-0 python3.9[137509]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:17 compute-0 sudo[137507]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:17 compute-0 sudo[137659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgvqtfwiijxhiyzraqlezcshchcgypqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610537.34955-240-15620809599579/AnsiballZ_command.py'
Dec 13 07:22:17 compute-0 sudo[137659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:17 compute-0 ceph-mon[74928]: pgmap v365: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:17 compute-0 python3.9[137661]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:17 compute-0 sudo[137659]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:18 compute-0 sudo[137814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lanhhzndrceclbebxxtkafwecmoqrxsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610537.8389912-248-254449963279580/AnsiballZ_blockinfile.py'
Dec 13 07:22:18 compute-0 sudo[137814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:18 compute-0 python3.9[137816]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:18 compute-0 sudo[137814]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:18 compute-0 sudo[137966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irpvmkkyrhusduesauurzfmemblpboqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610538.4818535-257-230348259571503/AnsiballZ_command.py'
Dec 13 07:22:18 compute-0 sudo[137966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:18 compute-0 python3.9[137968]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:18 compute-0 sudo[137966]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:19 compute-0 sudo[138119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjcrgvvxeqfyiiuqkwqerortvbbislpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610538.9893801-265-132647595726319/AnsiballZ_stat.py'
Dec 13 07:22:19 compute-0 sudo[138119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:19 compute-0 python3.9[138121]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:22:19 compute-0 sudo[138119]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:19 compute-0 ceph-mon[74928]: pgmap v366: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:19 compute-0 sudo[138273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnnssrkxogrrbfawlfahroifafqilvjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610539.552171-273-85236337004476/AnsiballZ_command.py'
Dec 13 07:22:19 compute-0 sudo[138273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:19 compute-0 python3.9[138275]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:19 compute-0 sudo[138273]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:20 compute-0 sudo[138428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobmxhzreixtiehzhudwlfkowcsufpnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610540.1067917-281-147286159300939/AnsiballZ_file.py'
Dec 13 07:22:20 compute-0 sudo[138428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:20 compute-0 python3.9[138430]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:20 compute-0 sudo[138428]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:21 compute-0 python3.9[138580]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:22:21 compute-0 ceph-mon[74928]: pgmap v367: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:21 compute-0 sudo[138731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pahcevkrftsampyambovrjxkdiwmokea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610541.8018155-321-152177321099847/AnsiballZ_command.py'
Dec 13 07:22:21 compute-0 sudo[138731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:22 compute-0 python3.9[138733]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:cb:58:d7:dd" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:22 compute-0 ovs-vsctl[138734]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:cb:58:d7:dd external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 13 07:22:22 compute-0 sudo[138731]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:22 compute-0 sudo[138884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aocngvevkxiwegelhrwmzbqaexhzullw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610542.2951806-330-83149921830107/AnsiballZ_command.py'
Dec 13 07:22:22 compute-0 sudo[138884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:22 compute-0 python3.9[138886]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:22 compute-0 sudo[138884]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:22 compute-0 sudo[139039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnuggjjhapfpwyxblkgbauaofwmwctbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610542.7786226-338-117675603741565/AnsiballZ_command.py'
Dec 13 07:22:22 compute-0 sudo[139039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:23 compute-0 python3.9[139041]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:23 compute-0 ovs-vsctl[139042]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 13 07:22:23 compute-0 sudo[139039]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:23 compute-0 python3.9[139192]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:22:23 compute-0 ceph-mon[74928]: pgmap v368: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:24 compute-0 sudo[139344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtafacvgokxzgltvsrniwsxstqdiqsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610543.7625928-355-127255353512981/AnsiballZ_file.py'
Dec 13 07:22:24 compute-0 sudo[139344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:24 compute-0 python3.9[139346]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:24 compute-0 sudo[139344]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:24 compute-0 sudo[139496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sotbkjaivpqijxzofxjjzhpzzeyazzqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610544.3754833-363-123275250491343/AnsiballZ_stat.py'
Dec 13 07:22:24 compute-0 sudo[139496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:24 compute-0 python3.9[139498]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:24 compute-0 sudo[139496]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:24 compute-0 sudo[139574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-merjyuakfggcbluuhwhtpvyodpzrbtio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610544.3754833-363-123275250491343/AnsiballZ_file.py'
Dec 13 07:22:24 compute-0 sudo[139574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:25 compute-0 python3.9[139576]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:25 compute-0 sudo[139574]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:25 compute-0 sudo[139726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdulclzwtwrsidgrbrdktpqnnzfxyfxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610545.1984181-363-56680213075333/AnsiballZ_stat.py'
Dec 13 07:22:25 compute-0 sudo[139726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:25 compute-0 python3.9[139728]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:25 compute-0 sudo[139726]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:25 compute-0 ceph-mon[74928]: pgmap v369: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:25 compute-0 sudo[139804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrzjbxnuhztnvzyglfmreliguxsdxmgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610545.1984181-363-56680213075333/AnsiballZ_file.py'
Dec 13 07:22:25 compute-0 sudo[139804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:25 compute-0 python3.9[139806]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:25 compute-0 sudo[139804]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:26 compute-0 sudo[139956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqvkpuobwoefxtsizltoszjpakendwbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610546.0383954-386-100199895503633/AnsiballZ_file.py'
Dec 13 07:22:26 compute-0 sudo[139956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:26 compute-0 python3.9[139958]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:26 compute-0 sudo[139956]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:26 compute-0 sudo[140108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omdoasqekuoerykytfgdkchefjcnxhhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610546.5498207-394-234219627624785/AnsiballZ_stat.py'
Dec 13 07:22:26 compute-0 sudo[140108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:26 compute-0 python3.9[140110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:26 compute-0 sudo[140108]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:27 compute-0 sudo[140186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlrkgfjsahijmjvvwxaxbhxxzbdggqzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610546.5498207-394-234219627624785/AnsiballZ_file.py'
Dec 13 07:22:27 compute-0 sudo[140186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:27 compute-0 python3.9[140188]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:27 compute-0 sudo[140186]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:27 compute-0 sudo[140338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqpntbhuhryhgffxhubervoqyvqitvzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610547.424364-406-185920295897533/AnsiballZ_stat.py'
Dec 13 07:22:27 compute-0 sudo[140338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:27 compute-0 ceph-mon[74928]: pgmap v370: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.744809) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547744935, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1442, "num_deletes": 250, "total_data_size": 2249453, "memory_usage": 2286904, "flush_reason": "Manual Compaction"}
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547751478, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1299021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7381, "largest_seqno": 8822, "table_properties": {"data_size": 1294085, "index_size": 2204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12798, "raw_average_key_size": 19, "raw_value_size": 1283100, "raw_average_value_size": 1998, "num_data_blocks": 104, "num_entries": 642, "num_filter_entries": 642, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610397, "oldest_key_time": 1765610397, "file_creation_time": 1765610547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6729 microseconds, and 5585 cpu microseconds.
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.751565) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1299021 bytes OK
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.751613) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752041) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752053) EVENT_LOG_v1 {"time_micros": 1765610547752050, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752076) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2243069, prev total WAL file size 2243069, number of live WAL files 2.
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752912) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1268KB)], [20(7551KB)]
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547752993, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9031984, "oldest_snapshot_seqno": -1}
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3317 keys, 6892166 bytes, temperature: kUnknown
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547769544, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6892166, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6867192, "index_size": 15585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 79564, "raw_average_key_size": 23, "raw_value_size": 6804386, "raw_average_value_size": 2051, "num_data_blocks": 691, "num_entries": 3317, "num_filter_entries": 3317, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765610547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.769736) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6892166 bytes
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.770147) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 544.2 rd, 415.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.4 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.3) write-amplify(5.3) OK, records in: 3760, records dropped: 443 output_compression: NoCompression
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.770165) EVENT_LOG_v1 {"time_micros": 1765610547770155, "job": 6, "event": "compaction_finished", "compaction_time_micros": 16596, "compaction_time_cpu_micros": 13938, "output_level": 6, "num_output_files": 1, "total_output_size": 6892166, "num_input_records": 3760, "num_output_records": 3317, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547770461, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610547771283, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.752808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:22:27 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:22:27.771330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:22:27 compute-0 python3.9[140340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:27 compute-0 sudo[140338]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:27 compute-0 sudo[140416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmihnyvehmifscswqsjalxiacgqigkec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610547.424364-406-185920295897533/AnsiballZ_file.py'
Dec 13 07:22:27 compute-0 sudo[140416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:28 compute-0 python3.9[140418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:28 compute-0 sudo[140416]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:28 compute-0 sudo[140568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdhbqnjaomqsaarknrjuzjilnjyhvtnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610548.2985501-418-273794009632872/AnsiballZ_systemd.py'
Dec 13 07:22:28 compute-0 sudo[140568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:28 compute-0 ceph-mon[74928]: pgmap v371: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:28 compute-0 sudo[140571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:22:28 compute-0 sudo[140571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:28 compute-0 sudo[140571]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:28 compute-0 sudo[140596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:22:28 compute-0 sudo[140596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:28 compute-0 python3.9[140570]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:22:28 compute-0 systemd[1]: Reloading.
Dec 13 07:22:29 compute-0 systemd-rc-local-generator[140639]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:22:29 compute-0 systemd-sysv-generator[140646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:22:29 compute-0 sudo[140568]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:29 compute-0 sudo[140596]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:22:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:22:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:22:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:22:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:22:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:22:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:22:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:22:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:22:29 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:22:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:22:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:22:29 compute-0 sudo[140709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:22:29 compute-0 sudo[140709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:29 compute-0 sudo[140709]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:29 compute-0 sudo[140761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:22:29 compute-0 sudo[140761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:29 compute-0 sudo[140884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edipcrttxyuiviygeztmjlekjdfydslr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610549.3389637-426-234315600118693/AnsiballZ_stat.py'
Dec 13 07:22:29 compute-0 sudo[140884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:29 compute-0 podman[140898]: 2025-12-13 07:22:29.641956718 +0000 UTC m=+0.028716770 container create 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:22:29 compute-0 systemd[1]: Started libpod-conmon-529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5.scope.
Dec 13 07:22:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:22:29 compute-0 python3.9[140886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:29 compute-0 podman[140898]: 2025-12-13 07:22:29.705621196 +0000 UTC m=+0.092381268 container init 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:22:29 compute-0 podman[140898]: 2025-12-13 07:22:29.712594249 +0000 UTC m=+0.099354311 container start 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 07:22:29 compute-0 podman[140898]: 2025-12-13 07:22:29.713878982 +0000 UTC m=+0.100639034 container attach 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:22:29 compute-0 pensive_leavitt[140911]: 167 167
Dec 13 07:22:29 compute-0 systemd[1]: libpod-529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5.scope: Deactivated successfully.
Dec 13 07:22:29 compute-0 podman[140898]: 2025-12-13 07:22:29.719925866 +0000 UTC m=+0.106686068 container died 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 07:22:29 compute-0 podman[140898]: 2025-12-13 07:22:29.63016627 +0000 UTC m=+0.016926332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f7b694e735924e00b219a792064c0b43bb18e6db39d011ac23dd82162e84a2-merged.mount: Deactivated successfully.
Dec 13 07:22:29 compute-0 podman[140898]: 2025-12-13 07:22:29.743940506 +0000 UTC m=+0.130700559 container remove 529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_leavitt, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 07:22:29 compute-0 sudo[140884]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:22:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:22:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:22:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:22:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:22:29 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:22:29 compute-0 systemd[1]: libpod-conmon-529a2aed9e55cc78e952676bd272abf013507d2f492b9865b8009031d93ecbe5.scope: Deactivated successfully.
Dec 13 07:22:29 compute-0 podman[140958]: 2025-12-13 07:22:29.876126303 +0000 UTC m=+0.034805152 container create 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 07:22:29 compute-0 systemd[1]: Started libpod-conmon-90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c.scope.
Dec 13 07:22:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:29 compute-0 sudo[141024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmfjtvvxwvrvqtmkwhchbecgojqjuzca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610549.3389637-426-234315600118693/AnsiballZ_file.py'
Dec 13 07:22:29 compute-0 sudo[141024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:29 compute-0 podman[140958]: 2025-12-13 07:22:29.939840353 +0000 UTC m=+0.098519203 container init 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:22:29 compute-0 podman[140958]: 2025-12-13 07:22:29.945593486 +0000 UTC m=+0.104272335 container start 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:22:29 compute-0 podman[140958]: 2025-12-13 07:22:29.946698842 +0000 UTC m=+0.105377681 container attach 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:22:29 compute-0 podman[140958]: 2025-12-13 07:22:29.862891804 +0000 UTC m=+0.021570672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:22:30 compute-0 python3.9[141026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:30 compute-0 sudo[141024]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:30 compute-0 pedantic_elgamal[141015]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:22:30 compute-0 pedantic_elgamal[141015]: --> All data devices are unavailable
Dec 13 07:22:30 compute-0 systemd[1]: libpod-90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c.scope: Deactivated successfully.
Dec 13 07:22:30 compute-0 podman[140958]: 2025-12-13 07:22:30.327556802 +0000 UTC m=+0.486235641 container died 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 07:22:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc4687a1bfd69030ed94daa2f5d58c1f1647b7f21b2248fbabdb484d36c69e02-merged.mount: Deactivated successfully.
Dec 13 07:22:30 compute-0 podman[140958]: 2025-12-13 07:22:30.354997226 +0000 UTC m=+0.513676075 container remove 90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:22:30 compute-0 systemd[1]: libpod-conmon-90a2cd0f572960ab4503334a7c06179bfbdb3d486b68f7304ebaf3c243ab7c5c.scope: Deactivated successfully.
Dec 13 07:22:30 compute-0 sudo[140761]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:30 compute-0 sudo[141152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:22:30 compute-0 sudo[141152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:30 compute-0 sudo[141152]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:30 compute-0 sudo[141201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:22:30 compute-0 sudo[141201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:30 compute-0 sudo[141251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blpzbnnvxkbcjtmykccwohramhuiszrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610550.2698002-438-120328274760546/AnsiballZ_stat.py'
Dec 13 07:22:30 compute-0 sudo[141251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:30 compute-0 python3.9[141254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:30 compute-0 sudo[141251]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:30 compute-0 podman[141267]: 2025-12-13 07:22:30.745060712 +0000 UTC m=+0.026947959 container create ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 07:22:30 compute-0 ceph-mon[74928]: pgmap v372: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:30 compute-0 systemd[1]: Started libpod-conmon-ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899.scope.
Dec 13 07:22:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:22:30 compute-0 podman[141267]: 2025-12-13 07:22:30.80530075 +0000 UTC m=+0.087188018 container init ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 07:22:30 compute-0 podman[141267]: 2025-12-13 07:22:30.810343469 +0000 UTC m=+0.092230717 container start ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:22:30 compute-0 podman[141267]: 2025-12-13 07:22:30.811471115 +0000 UTC m=+0.093358354 container attach ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:22:30 compute-0 sleepy_visvesvaraya[141303]: 167 167
Dec 13 07:22:30 compute-0 systemd[1]: libpod-ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899.scope: Deactivated successfully.
Dec 13 07:22:30 compute-0 conmon[141303]: conmon ebb70961688b1c9eab28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899.scope/container/memory.events
Dec 13 07:22:30 compute-0 podman[141267]: 2025-12-13 07:22:30.814174853 +0000 UTC m=+0.096062101 container died ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 07:22:30 compute-0 podman[141267]: 2025-12-13 07:22:30.73322074 +0000 UTC m=+0.015108008 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0e503ffcfaafacfc947c6fd599ec32a5d67a36d332a9d36992be412cd39345a-merged.mount: Deactivated successfully.
Dec 13 07:22:30 compute-0 podman[141267]: 2025-12-13 07:22:30.840122472 +0000 UTC m=+0.122009720 container remove ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_visvesvaraya, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:22:30 compute-0 systemd[1]: libpod-conmon-ebb70961688b1c9eab28abe7281879474033066c52ed44a56418eb836508a899.scope: Deactivated successfully.
Dec 13 07:22:30 compute-0 sudo[141369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xheopeqsqqcbfxmjoanoamyabuhmbxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610550.2698002-438-120328274760546/AnsiballZ_file.py'
Dec 13 07:22:30 compute-0 sudo[141369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:30 compute-0 podman[141377]: 2025-12-13 07:22:30.968114174 +0000 UTC m=+0.033625377 container create bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:22:30 compute-0 systemd[1]: Started libpod-conmon-bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c.scope.
Dec 13 07:22:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:31 compute-0 podman[141377]: 2025-12-13 07:22:31.033861523 +0000 UTC m=+0.099372747 container init bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:22:31 compute-0 podman[141377]: 2025-12-13 07:22:31.039674007 +0000 UTC m=+0.105185209 container start bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:22:31 compute-0 podman[141377]: 2025-12-13 07:22:31.041022278 +0000 UTC m=+0.106533481 container attach bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:22:31 compute-0 podman[141377]: 2025-12-13 07:22:30.955029826 +0000 UTC m=+0.020541049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:22:31 compute-0 python3.9[141371]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:31 compute-0 sudo[141369]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:31 compute-0 nice_herschel[141390]: {
Dec 13 07:22:31 compute-0 nice_herschel[141390]:     "0": [
Dec 13 07:22:31 compute-0 nice_herschel[141390]:         {
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "devices": [
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "/dev/loop3"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             ],
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_name": "ceph_lv0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_size": "21470642176",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "name": "ceph_lv0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "tags": {
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cluster_name": "ceph",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.crush_device_class": "",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.encrypted": "0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.objectstore": "bluestore",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osd_id": "0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.type": "block",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.vdo": "0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.with_tpm": "0"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             },
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "type": "block",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "vg_name": "ceph_vg0"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:         }
Dec 13 07:22:31 compute-0 nice_herschel[141390]:     ],
Dec 13 07:22:31 compute-0 nice_herschel[141390]:     "1": [
Dec 13 07:22:31 compute-0 nice_herschel[141390]:         {
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "devices": [
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "/dev/loop4"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             ],
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_name": "ceph_lv1",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_size": "21470642176",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "name": "ceph_lv1",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "tags": {
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cluster_name": "ceph",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.crush_device_class": "",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.encrypted": "0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.objectstore": "bluestore",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osd_id": "1",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.type": "block",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.vdo": "0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.with_tpm": "0"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             },
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "type": "block",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "vg_name": "ceph_vg1"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:         }
Dec 13 07:22:31 compute-0 nice_herschel[141390]:     ],
Dec 13 07:22:31 compute-0 nice_herschel[141390]:     "2": [
Dec 13 07:22:31 compute-0 nice_herschel[141390]:         {
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "devices": [
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "/dev/loop5"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             ],
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_name": "ceph_lv2",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_size": "21470642176",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "name": "ceph_lv2",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "tags": {
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.cluster_name": "ceph",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.crush_device_class": "",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.encrypted": "0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.objectstore": "bluestore",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osd_id": "2",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.type": "block",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.vdo": "0",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:                 "ceph.with_tpm": "0"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             },
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "type": "block",
Dec 13 07:22:31 compute-0 nice_herschel[141390]:             "vg_name": "ceph_vg2"
Dec 13 07:22:31 compute-0 nice_herschel[141390]:         }
Dec 13 07:22:31 compute-0 nice_herschel[141390]:     ]
Dec 13 07:22:31 compute-0 nice_herschel[141390]: }
Dec 13 07:22:31 compute-0 systemd[1]: libpod-bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c.scope: Deactivated successfully.
Dec 13 07:22:31 compute-0 podman[141377]: 2025-12-13 07:22:31.303692126 +0000 UTC m=+0.369203329 container died bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 07:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a204b34ecab8bcc113462c30dce111de6be6481071ef7e8cdb437dc0879475a-merged.mount: Deactivated successfully.
Dec 13 07:22:31 compute-0 podman[141377]: 2025-12-13 07:22:31.331913395 +0000 UTC m=+0.397424598 container remove bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_herschel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:22:31 compute-0 systemd[1]: libpod-conmon-bc680d267c05caca87c481d192e0aa7e849e71636a05a8fc0588e7d7f90b878c.scope: Deactivated successfully.
Dec 13 07:22:31 compute-0 sudo[141201]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:31 compute-0 sudo[141532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:22:31 compute-0 sudo[141532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:31 compute-0 sudo[141532]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:31 compute-0 sudo[141582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsabuffiadzmxcngicwuopozbgtmtpvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610551.2112832-450-158432571363653/AnsiballZ_systemd.py'
Dec 13 07:22:31 compute-0 sudo[141582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:31 compute-0 sudo[141585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:22:31 compute-0 sudo[141585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:31 compute-0 podman[141620]: 2025-12-13 07:22:31.697841644 +0000 UTC m=+0.029106382 container create 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 13 07:22:31 compute-0 python3.9[141586]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:22:31 compute-0 systemd[1]: Reloading.
Dec 13 07:22:31 compute-0 systemd-sysv-generator[141657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:22:31 compute-0 podman[141620]: 2025-12-13 07:22:31.685825963 +0000 UTC m=+0.017090721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:22:31 compute-0 systemd-rc-local-generator[141653]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:22:31 compute-0 systemd[1]: Started libpod-conmon-3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018.scope.
Dec 13 07:22:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:22:31 compute-0 podman[141620]: 2025-12-13 07:22:31.970180807 +0000 UTC m=+0.301445555 container init 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:22:31 compute-0 podman[141620]: 2025-12-13 07:22:31.976486847 +0000 UTC m=+0.307751585 container start 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 07:22:31 compute-0 lucid_benz[141670]: 167 167
Dec 13 07:22:31 compute-0 podman[141620]: 2025-12-13 07:22:31.981097895 +0000 UTC m=+0.312362643 container attach 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:22:31 compute-0 systemd[1]: libpod-3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018.scope: Deactivated successfully.
Dec 13 07:22:31 compute-0 podman[141620]: 2025-12-13 07:22:31.981840899 +0000 UTC m=+0.313105638 container died 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:22:31 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 07:22:31 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 07:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-980b0a69c0866cc6de50bc52d23e85f14c5c09ebcac4368e46d44ea12be00b10-merged.mount: Deactivated successfully.
Dec 13 07:22:32 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 07:22:32 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 07:22:32 compute-0 podman[141620]: 2025-12-13 07:22:32.008069557 +0000 UTC m=+0.339334296 container remove 3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_benz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 07:22:32 compute-0 systemd[1]: libpod-conmon-3f71edb44a600277a78cb5b031b4b3ab80b64e7f823c32840ba3311480752018.scope: Deactivated successfully.
Dec 13 07:22:32 compute-0 sudo[141582]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:32 compute-0 podman[141722]: 2025-12-13 07:22:32.140921585 +0000 UTC m=+0.031874359 container create 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:22:32 compute-0 systemd[1]: Started libpod-conmon-9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc.scope.
Dec 13 07:22:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:32 compute-0 podman[141722]: 2025-12-13 07:22:32.216293315 +0000 UTC m=+0.107246109 container init 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:22:32 compute-0 podman[141722]: 2025-12-13 07:22:32.221688585 +0000 UTC m=+0.112641359 container start 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:22:32 compute-0 podman[141722]: 2025-12-13 07:22:32.223338153 +0000 UTC m=+0.114290937 container attach 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 07:22:32 compute-0 podman[141722]: 2025-12-13 07:22:32.128798793 +0000 UTC m=+0.019751587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:22:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:32 compute-0 sudo[141874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plnevgjevjocdiynjkhemffuqpdzajrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610552.2354846-460-175367534740294/AnsiballZ_file.py'
Dec 13 07:22:32 compute-0 sudo[141874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:32 compute-0 python3.9[141876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:32 compute-0 sudo[141874]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:32 compute-0 lvm[141988]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:22:32 compute-0 lvm[141987]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:22:32 compute-0 lvm[141988]: VG ceph_vg1 finished
Dec 13 07:22:32 compute-0 lvm[141987]: VG ceph_vg0 finished
Dec 13 07:22:32 compute-0 lvm[141991]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:22:32 compute-0 lvm[141991]: VG ceph_vg2 finished
Dec 13 07:22:32 compute-0 wonderful_kirch[141735]: {}
Dec 13 07:22:32 compute-0 podman[141722]: 2025-12-13 07:22:32.889647757 +0000 UTC m=+0.780600531 container died 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:22:32 compute-0 systemd[1]: libpod-9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc.scope: Deactivated successfully.
Dec 13 07:22:32 compute-0 systemd[1]: libpod-9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc.scope: Consumed 1.017s CPU time.
Dec 13 07:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-08fa3beb7d57924b54e17ac3c72e5bc23bb316ccbf0bfa3dd98f49da725ae81c-merged.mount: Deactivated successfully.
Dec 13 07:22:32 compute-0 podman[141722]: 2025-12-13 07:22:32.914530808 +0000 UTC m=+0.805483582 container remove 9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:22:32 compute-0 systemd[1]: libpod-conmon-9ef07548b54809b544b4fe97ad0cc39724c849e2dd48efe485909f0a717a5ecc.scope: Deactivated successfully.
Dec 13 07:22:32 compute-0 sudo[141585]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:22:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:22:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:22:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:22:32 compute-0 sudo[142028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:22:32 compute-0 sudo[142028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:22:33 compute-0 sudo[142028]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:33 compute-0 sudo[142130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhpgzppxkhisffexbtjvhewmtclynldw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610552.7842076-468-161078544965777/AnsiballZ_stat.py'
Dec 13 07:22:33 compute-0 sudo[142130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:33 compute-0 python3.9[142132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:33 compute-0 sudo[142130]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:33 compute-0 ceph-mon[74928]: pgmap v373: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:22:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:22:33 compute-0 sudo[142253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbukdffeckztpxnsolnebnnaiihwsrlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610552.7842076-468-161078544965777/AnsiballZ_copy.py'
Dec 13 07:22:33 compute-0 sudo[142253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:33 compute-0 python3.9[142255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610552.7842076-468-161078544965777/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:33 compute-0 sudo[142253]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:34 compute-0 sudo[142405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlaktlfaghunosxyhlgqxpwjpfeyvubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610553.959214-485-47950883423975/AnsiballZ_file.py'
Dec 13 07:22:34 compute-0 sudo[142405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:34 compute-0 python3.9[142407]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:34 compute-0 sudo[142405]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:34 compute-0 sudo[142557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebmgdmwviienmnpounxgeejgdlyqgrfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610554.4789786-493-272967139376869/AnsiballZ_stat.py'
Dec 13 07:22:34 compute-0 sudo[142557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:34 compute-0 python3.9[142559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:22:34 compute-0 sudo[142557]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:35 compute-0 sudo[142680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlxxikkrxrsxdntuelfmhtiqjvsayvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610554.4789786-493-272967139376869/AnsiballZ_copy.py'
Dec 13 07:22:35 compute-0 sudo[142680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:35 compute-0 python3.9[142682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610554.4789786-493-272967139376869/.source.json _original_basename=.lvsilzj8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:35 compute-0 sudo[142680]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:35 compute-0 ceph-mon[74928]: pgmap v374: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:35 compute-0 sudo[142832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixnbyakduyduzlvvojkbobidfbvydcli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610555.4090025-508-262196597787964/AnsiballZ_file.py'
Dec 13 07:22:35 compute-0 sudo[142832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:35 compute-0 python3.9[142834]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:35 compute-0 sudo[142832]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:36 compute-0 sudo[142984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lowlyvzkqfrzzcpogqjtjjkrhclczutt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610555.9420965-516-265305683840830/AnsiballZ_stat.py'
Dec 13 07:22:36 compute-0 sudo[142984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:36 compute-0 sudo[142984]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:36 compute-0 sudo[143107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmadwygzwhgdomxkisvatkogzcujjtna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610555.9420965-516-265305683840830/AnsiballZ_copy.py'
Dec 13 07:22:36 compute-0 sudo[143107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:36 compute-0 sudo[143107]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:37 compute-0 ceph-mon[74928]: pgmap v375: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:37 compute-0 sudo[143259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbthyqtvafuvnhfyuovmygqfcjpuoyhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610556.9961207-533-119254181024639/AnsiballZ_container_config_data.py'
Dec 13 07:22:37 compute-0 sudo[143259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:37 compute-0 python3.9[143261]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 13 07:22:37 compute-0 sudo[143259]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:38 compute-0 sudo[143411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xskscbraqopknaygoanbtitgfwemtpjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610557.8268824-542-29441446040359/AnsiballZ_container_config_hash.py'
Dec 13 07:22:38 compute-0 sudo[143411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:22:38
Dec 13 07:22:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:22:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:22:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', 'images', 'vms']
Dec 13 07:22:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:22:38 compute-0 python3.9[143413]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 07:22:38 compute-0 sudo[143411]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:38 compute-0 sudo[143563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbccwzyssfzasekcfxfvgowqpnliftkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610558.5040486-551-236735787268797/AnsiballZ_podman_container_info.py'
Dec 13 07:22:38 compute-0 sudo[143563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:38 compute-0 python3.9[143565]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:22:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:22:39 compute-0 sudo[143563]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:39 compute-0 ceph-mon[74928]: pgmap v376: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:39 compute-0 sudo[143735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnyuaeekbsjwafawxhgdvhntrcoxgpfk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610559.5673513-564-147456067914797/AnsiballZ_edpm_container_manage.py'
Dec 13 07:22:39 compute-0 sudo[143735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:40 compute-0 python3[143737]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 07:22:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:41 compute-0 ceph-mon[74928]: pgmap v377: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:43 compute-0 ceph-mon[74928]: pgmap v378: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:45 compute-0 ceph-mon[74928]: pgmap v379: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:45 compute-0 podman[143748]: 2025-12-13 07:22:45.586237452 +0000 UTC m=+5.361082652 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 07:22:45 compute-0 podman[143847]: 2025-12-13 07:22:45.701485826 +0000 UTC m=+0.033874444 container create d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:22:45 compute-0 podman[143847]: 2025-12-13 07:22:45.687286355 +0000 UTC m=+0.019674993 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 07:22:45 compute-0 python3[143737]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 07:22:45 compute-0 sudo[143735]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:46 compute-0 sudo[144024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdvechnhwypnwvskracryijltiuyizpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610565.9586327-572-145006699127308/AnsiballZ_stat.py'
Dec 13 07:22:46 compute-0 sudo[144024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:46 compute-0 python3.9[144026]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:22:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:46 compute-0 sudo[144024]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:46 compute-0 sudo[144178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqvpxzxyjaxwzianxnbagwihfqqbhbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610566.5561583-581-26494457868039/AnsiballZ_file.py'
Dec 13 07:22:46 compute-0 sudo[144178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:46 compute-0 python3.9[144180]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:46 compute-0 sudo[144178]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:47 compute-0 sudo[144254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbcechywyxhywdapxsidsopzabwotqsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610566.5561583-581-26494457868039/AnsiballZ_stat.py'
Dec 13 07:22:47 compute-0 sudo[144254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:47 compute-0 python3.9[144256]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:22:47 compute-0 sudo[144254]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:47 compute-0 ceph-mon[74928]: pgmap v380: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:47 compute-0 sudo[144405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orbiaritmcwksedcmsrhkkugrqocqvdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610567.3360138-581-20068041669165/AnsiballZ_copy.py'
Dec 13 07:22:47 compute-0 sudo[144405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:47 compute-0 python3.9[144407]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610567.3360138-581-20068041669165/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:22:47 compute-0 sudo[144405]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:48 compute-0 sudo[144481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtptkygbcijubzjuowsfvwdhzgwqwbao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610567.3360138-581-20068041669165/AnsiballZ_systemd.py'
Dec 13 07:22:48 compute-0 sudo[144481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:48 compute-0 python3.9[144483]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 07:22:48 compute-0 systemd[1]: Reloading.
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:48 compute-0 systemd-sysv-generator[144508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:22:48 compute-0 systemd-rc-local-generator[144504]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:22:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:22:48 compute-0 sudo[144481]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:48 compute-0 sudo[144592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooggyawvtnfmyikjalsmvayqwaibkwbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610567.3360138-581-20068041669165/AnsiballZ_systemd.py'
Dec 13 07:22:48 compute-0 sudo[144592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:49 compute-0 python3.9[144594]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:22:49 compute-0 systemd[1]: Reloading.
Dec 13 07:22:49 compute-0 systemd-rc-local-generator[144617]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:22:49 compute-0 systemd-sysv-generator[144620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:22:49 compute-0 systemd[1]: Starting ovn_controller container...
Dec 13 07:22:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e60b40e88c9d4273841e05a580f706923cb5a4635c1fb0bec6354585657969/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 13 07:22:49 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed.
Dec 13 07:22:49 compute-0 podman[144635]: 2025-12-13 07:22:49.420152805 +0000 UTC m=+0.087136491 container init d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + sudo -E kolla_set_configs
Dec 13 07:22:49 compute-0 ceph-mon[74928]: pgmap v381: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:49 compute-0 podman[144635]: 2025-12-13 07:22:49.442638715 +0000 UTC m=+0.109622381 container start d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:22:49 compute-0 edpm-start-podman-container[144635]: ovn_controller
Dec 13 07:22:49 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 13 07:22:49 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 13 07:22:49 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 13 07:22:49 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 13 07:22:49 compute-0 systemd[144676]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 13 07:22:49 compute-0 edpm-start-podman-container[144634]: Creating additional drop-in dependency for "ovn_controller" (d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed)
Dec 13 07:22:49 compute-0 podman[144654]: 2025-12-13 07:22:49.523217503 +0000 UTC m=+0.072472828 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 07:22:49 compute-0 systemd[1]: d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed-304b4c4e52c8cee3.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 07:22:49 compute-0 systemd[1]: d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed-304b4c4e52c8cee3.service: Failed with result 'exit-code'.
Dec 13 07:22:49 compute-0 systemd[1]: Reloading.
Dec 13 07:22:49 compute-0 systemd[144676]: Queued start job for default target Main User Target.
Dec 13 07:22:49 compute-0 systemd[144676]: Created slice User Application Slice.
Dec 13 07:22:49 compute-0 systemd[144676]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 13 07:22:49 compute-0 systemd[144676]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 07:22:49 compute-0 systemd[144676]: Reached target Paths.
Dec 13 07:22:49 compute-0 systemd[144676]: Reached target Timers.
Dec 13 07:22:49 compute-0 systemd[144676]: Starting D-Bus User Message Bus Socket...
Dec 13 07:22:49 compute-0 systemd[144676]: Starting Create User's Volatile Files and Directories...
Dec 13 07:22:49 compute-0 systemd-sysv-generator[144725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:22:49 compute-0 systemd[144676]: Listening on D-Bus User Message Bus Socket.
Dec 13 07:22:49 compute-0 systemd[144676]: Reached target Sockets.
Dec 13 07:22:49 compute-0 systemd[144676]: Finished Create User's Volatile Files and Directories.
Dec 13 07:22:49 compute-0 systemd[144676]: Reached target Basic System.
Dec 13 07:22:49 compute-0 systemd[144676]: Reached target Main User Target.
Dec 13 07:22:49 compute-0 systemd[144676]: Startup finished in 108ms.
Dec 13 07:22:49 compute-0 systemd-rc-local-generator[144721]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:22:49 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 13 07:22:49 compute-0 systemd[1]: Started ovn_controller container.
Dec 13 07:22:49 compute-0 systemd[1]: Started Session c1 of User root.
Dec 13 07:22:49 compute-0 sudo[144592]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:49 compute-0 ovn_controller[144647]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 07:22:49 compute-0 ovn_controller[144647]: INFO:__main__:Validating config file
Dec 13 07:22:49 compute-0 ovn_controller[144647]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 07:22:49 compute-0 ovn_controller[144647]: INFO:__main__:Writing out command to execute
Dec 13 07:22:49 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 13 07:22:49 compute-0 ovn_controller[144647]: ++ cat /run_command
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + ARGS=
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + sudo kolla_copy_cacerts
Dec 13 07:22:49 compute-0 systemd[1]: Started Session c2 of User root.
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + [[ ! -n '' ]]
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + . kolla_extend_start
Dec 13 07:22:49 compute-0 ovn_controller[144647]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + umask 0022
Dec 13 07:22:49 compute-0 ovn_controller[144647]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 13 07:22:49 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <info>  [1765610569.8944] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <info>  [1765610569.8950] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <warn>  [1765610569.8951] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <info>  [1765610569.8959] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <info>  [1765610569.8964] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <info>  [1765610569.8967] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 13 07:22:49 compute-0 kernel: br-int: entered promiscuous mode
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00019|main|INFO|OVS feature set changed, force recompute.
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 07:22:49 compute-0 ovn_controller[144647]: 2025-12-13T07:22:49Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <info>  [1765610569.9149] manager: (ovn-d8e85b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 13 07:22:49 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <info>  [1765610569.9297] device (genev_sys_6081): carrier: link connected
Dec 13 07:22:49 compute-0 NetworkManager[48896]: <info>  [1765610569.9299] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 13 07:22:49 compute-0 systemd-udevd[144798]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 07:22:49 compute-0 systemd-udevd[144799]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 07:22:50 compute-0 sudo[144906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzzvoodgtkrrvstjucrutlxcnnjjbivd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610569.9363601-609-29610926997702/AnsiballZ_command.py'
Dec 13 07:22:50 compute-0 sudo[144906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:50 compute-0 python3.9[144908]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:50 compute-0 ovs-vsctl[144909]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 13 07:22:50 compute-0 sudo[144906]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:50 compute-0 sudo[145059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfqdafcndsoetsnqbepwrkmlkqpppgef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610570.4773664-617-206925428289840/AnsiballZ_command.py'
Dec 13 07:22:50 compute-0 sudo[145059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:50 compute-0 python3.9[145061]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:50 compute-0 ovs-vsctl[145063]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 13 07:22:50 compute-0 sudo[145059]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:51 compute-0 sudo[145214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huxcamdbubuvmaxwxxbvjanfwmceveii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610571.1640828-631-117916866265516/AnsiballZ_command.py'
Dec 13 07:22:51 compute-0 sudo[145214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:51 compute-0 ceph-mon[74928]: pgmap v382: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:51 compute-0 python3.9[145216]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:22:51 compute-0 ovs-vsctl[145217]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 13 07:22:51 compute-0 sudo[145214]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:51 compute-0 sshd-session[133517]: Connection closed by 192.168.122.30 port 55324
Dec 13 07:22:51 compute-0 sshd-session[133514]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:22:51 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 13 07:22:51 compute-0 systemd[1]: session-48.scope: Consumed 43.474s CPU time.
Dec 13 07:22:51 compute-0 systemd-logind[745]: Session 48 logged out. Waiting for processes to exit.
Dec 13 07:22:51 compute-0 systemd-logind[745]: Removed session 48.
Dec 13 07:22:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:53 compute-0 ceph-mon[74928]: pgmap v383: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:55 compute-0 ceph-mon[74928]: pgmap v384: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:57 compute-0 sshd-session[145242]: Accepted publickey for zuul from 192.168.122.30 port 38534 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:22:57 compute-0 systemd-logind[745]: New session 50 of user zuul.
Dec 13 07:22:57 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 13 07:22:57 compute-0 sshd-session[145242]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:22:57 compute-0 ceph-mon[74928]: pgmap v385: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:22:58 compute-0 python3.9[145395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:22:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:58 compute-0 sudo[145549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ounupagvejszlcoblmempqivcnxnohbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610578.4931533-34-10791133896418/AnsiballZ_file.py'
Dec 13 07:22:58 compute-0 sudo[145549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:59 compute-0 python3.9[145551]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:59 compute-0 sudo[145549]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:59 compute-0 sudo[145701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthkwmhpqdqapmbdnpbzfkyhxgdzepnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610579.1274276-34-181604916604167/AnsiballZ_file.py'
Dec 13 07:22:59 compute-0 sudo[145701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:59 compute-0 ceph-mon[74928]: pgmap v386: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:22:59 compute-0 python3.9[145703]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:22:59 compute-0 sudo[145701]: pam_unix(sudo:session): session closed for user root
Dec 13 07:22:59 compute-0 sudo[145853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhbyvlcchtynbpukncturiznuagwwjij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610579.627492-34-120369172390680/AnsiballZ_file.py'
Dec 13 07:22:59 compute-0 sudo[145853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:22:59 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 13 07:22:59 compute-0 systemd[144676]: Activating special unit Exit the Session...
Dec 13 07:22:59 compute-0 systemd[144676]: Stopped target Main User Target.
Dec 13 07:22:59 compute-0 systemd[144676]: Stopped target Basic System.
Dec 13 07:22:59 compute-0 systemd[144676]: Stopped target Paths.
Dec 13 07:22:59 compute-0 systemd[144676]: Stopped target Sockets.
Dec 13 07:22:59 compute-0 systemd[144676]: Stopped target Timers.
Dec 13 07:22:59 compute-0 systemd[144676]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 13 07:22:59 compute-0 systemd[144676]: Closed D-Bus User Message Bus Socket.
Dec 13 07:22:59 compute-0 systemd[144676]: Stopped Create User's Volatile Files and Directories.
Dec 13 07:22:59 compute-0 systemd[144676]: Removed slice User Application Slice.
Dec 13 07:22:59 compute-0 systemd[144676]: Reached target Shutdown.
Dec 13 07:22:59 compute-0 systemd[144676]: Finished Exit the Session.
Dec 13 07:22:59 compute-0 systemd[144676]: Reached target Exit the Session.
Dec 13 07:22:59 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 13 07:22:59 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 13 07:22:59 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 13 07:22:59 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 13 07:22:59 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 13 07:22:59 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 13 07:22:59 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 13 07:22:59 compute-0 python3.9[145855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:00 compute-0 sudo[145853]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:00 compute-0 sudo[146006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hazhypzqsqsvkvvobqascarouzuoffox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610580.116793-34-82986249140964/AnsiballZ_file.py'
Dec 13 07:23:00 compute-0 sudo[146006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:00 compute-0 python3.9[146008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:00 compute-0 sudo[146006]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:00 compute-0 sudo[146158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwoqsfkjgusvqihsirspeudgnjqjjpoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610580.7097268-34-210987789814649/AnsiballZ_file.py'
Dec 13 07:23:00 compute-0 sudo[146158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:01 compute-0 python3.9[146160]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:01 compute-0 sudo[146158]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:01 compute-0 ceph-mon[74928]: pgmap v387: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:01 compute-0 python3.9[146310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:23:02 compute-0 sudo[146460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usngkprmumzmjuoplbwrrgihhhgcqjbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610581.8134856-78-176081490502422/AnsiballZ_seboolean.py'
Dec 13 07:23:02 compute-0 sudo[146460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:02 compute-0 python3.9[146462]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 13 07:23:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:02 compute-0 sudo[146460]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:03 compute-0 python3.9[146612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:03 compute-0 ceph-mon[74928]: pgmap v388: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:03 compute-0 python3.9[146733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610582.9269547-86-256388414577141/.source follow=False _original_basename=haproxy.j2 checksum=d225e0e1c34f765c55f17e757e326dba55238d01 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:04 compute-0 python3.9[146883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:04 compute-0 python3.9[147004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610584.0556464-101-203307664463491/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:05 compute-0 sudo[147154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teebywqweickfcaqrbatduqwukvyjzme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610585.0039008-118-193484372680558/AnsiballZ_setup.py'
Dec 13 07:23:05 compute-0 sudo[147154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:05 compute-0 ceph-mon[74928]: pgmap v389: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:05 compute-0 python3.9[147156]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:23:06 compute-0 sudo[147154]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:06 compute-0 sudo[147241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgecgyhktjfpaazcbzyjvdtmntiysbcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610585.0039008-118-193484372680558/AnsiballZ_dnf.py'
Dec 13 07:23:06 compute-0 sudo[147241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:06 compute-0 python3.9[147243]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:23:07 compute-0 ceph-mon[74928]: pgmap v390: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:07 compute-0 sudo[147241]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:08 compute-0 sudo[147395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sngkovbrjvbnvujjwojmyaduybgyjorn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610587.8287454-130-195177494699111/AnsiballZ_systemd.py'
Dec 13 07:23:08 compute-0 sudo[147395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:08 compute-0 python3.9[147397]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:23:08 compute-0 sudo[147395]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:23:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:23:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:23:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:23:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:23:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:23:09 compute-0 python3.9[147550]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:09 compute-0 ceph-mon[74928]: pgmap v391: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:09 compute-0 python3.9[147671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610588.7667036-138-68132395111239/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:09 compute-0 python3.9[147821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:10 compute-0 python3.9[147942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610589.6388168-138-114925049156814/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:11 compute-0 python3.9[148092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:11 compute-0 ceph-mon[74928]: pgmap v392: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:11 compute-0 python3.9[148213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610590.8967204-182-17815837219079/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:12 compute-0 python3.9[148363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:12 compute-0 python3.9[148484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610591.6803172-182-37697643639377/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:12 compute-0 python3.9[148634]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:23:13 compute-0 sudo[148786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzejdxpozayhfxlcdhqtflicjfvmgzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610592.996156-220-30667006729206/AnsiballZ_file.py'
Dec 13 07:23:13 compute-0 sudo[148786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:13 compute-0 python3.9[148788]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:13 compute-0 sudo[148786]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:13 compute-0 ceph-mon[74928]: pgmap v393: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:13 compute-0 sudo[148938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubblgyjzmrtttygrkpvcrcoovwctzfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610593.4756145-228-31051311347353/AnsiballZ_stat.py'
Dec 13 07:23:13 compute-0 sudo[148938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:13 compute-0 python3.9[148940]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:13 compute-0 sudo[148938]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:13 compute-0 sudo[149016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mloftcnjqczoatyexfhuktijjvdjomri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610593.4756145-228-31051311347353/AnsiballZ_file.py'
Dec 13 07:23:13 compute-0 sudo[149016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:14 compute-0 python3.9[149018]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:14 compute-0 sudo[149016]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:14 compute-0 sudo[149168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxtempjyngvqgzntdkevkdqtbecafgwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610594.2648716-228-14010225272380/AnsiballZ_stat.py'
Dec 13 07:23:14 compute-0 sudo[149168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:14 compute-0 python3.9[149170]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:14 compute-0 sudo[149168]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:14 compute-0 sudo[149246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwyvzrwjtvdxywekrmqhjhvluuuxfkyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610594.2648716-228-14010225272380/AnsiballZ_file.py'
Dec 13 07:23:14 compute-0 sudo[149246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:14 compute-0 python3.9[149248]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:14 compute-0 sudo[149246]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:15 compute-0 sudo[149398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjibmvgtuqtumytyjqgmjnjwwtnrjrml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610595.0665603-251-98636805491916/AnsiballZ_file.py'
Dec 13 07:23:15 compute-0 sudo[149398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:15 compute-0 ceph-mon[74928]: pgmap v394: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:15 compute-0 python3.9[149400]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:15 compute-0 sudo[149398]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:15 compute-0 sudo[149550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbacklazpkdxlcacwlmycpyzcsaqjnje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610595.6438804-259-189604209760605/AnsiballZ_stat.py'
Dec 13 07:23:15 compute-0 sudo[149550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:15 compute-0 python3.9[149552]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:16 compute-0 sudo[149550]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:16 compute-0 sudo[149628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doggdwelzkxtjduuxnbmiegafzpgehvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610595.6438804-259-189604209760605/AnsiballZ_file.py'
Dec 13 07:23:16 compute-0 sudo[149628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:16 compute-0 python3.9[149630]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:16 compute-0 sudo[149628]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:16 compute-0 sudo[149780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmbrpcmyzdurevdawvqersqmirhexqon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610596.4379754-271-125240723959951/AnsiballZ_stat.py'
Dec 13 07:23:16 compute-0 sudo[149780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:16 compute-0 python3.9[149782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:16 compute-0 sudo[149780]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:16 compute-0 sudo[149858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzyqgpgjgmylbelhxypqwazzppuawzjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610596.4379754-271-125240723959951/AnsiballZ_file.py'
Dec 13 07:23:16 compute-0 sudo[149858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:17 compute-0 python3.9[149860]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:17 compute-0 sudo[149858]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:17 compute-0 sudo[150010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxpslunhbcvdduirnfuoysqtaudtqcls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610597.2859776-283-2349224889048/AnsiballZ_systemd.py'
Dec 13 07:23:17 compute-0 sudo[150010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:17 compute-0 ceph-mon[74928]: pgmap v395: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:17 compute-0 python3.9[150012]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:17 compute-0 systemd[1]: Reloading.
Dec 13 07:23:17 compute-0 systemd-rc-local-generator[150035]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:23:17 compute-0 systemd-sysv-generator[150038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:23:18 compute-0 sudo[150010]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:18 compute-0 sudo[150199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqnjbdqvmmldcbbxpbtzysaukqjslwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610598.124661-291-119089380589082/AnsiballZ_stat.py'
Dec 13 07:23:18 compute-0 sudo[150199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:18 compute-0 python3.9[150201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:18 compute-0 sudo[150199]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:18 compute-0 sudo[150277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmjadorobkmvnsxkkwhicahwulzmjtst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610598.124661-291-119089380589082/AnsiballZ_file.py'
Dec 13 07:23:18 compute-0 sudo[150277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:18 compute-0 python3.9[150279]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:18 compute-0 sudo[150277]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:19 compute-0 sudo[150429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xemopfxbtsrmqclcjnlqvngvxzvxreng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610598.9896069-303-139533368102619/AnsiballZ_stat.py'
Dec 13 07:23:19 compute-0 sudo[150429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:19 compute-0 python3.9[150431]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:19 compute-0 sudo[150429]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:19 compute-0 ceph-mon[74928]: pgmap v396: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:19 compute-0 sudo[150507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obrfasvgnwknriltaqhdjdqwgrcwizwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610598.9896069-303-139533368102619/AnsiballZ_file.py'
Dec 13 07:23:19 compute-0 sudo[150507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:19 compute-0 ovn_controller[144647]: 2025-12-13T07:23:19Z|00025|memory|INFO|17280 kB peak resident set size after 29.7 seconds
Dec 13 07:23:19 compute-0 ovn_controller[144647]: 2025-12-13T07:23:19Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 13 07:23:19 compute-0 podman[150509]: 2025-12-13 07:23:19.625499109 +0000 UTC m=+0.066293443 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:23:19 compute-0 python3.9[150510]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:19 compute-0 sudo[150507]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:20 compute-0 sudo[150682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcytjprlgdaqcbcqqpifrotclikixnha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610599.8475213-315-77686629486530/AnsiballZ_systemd.py'
Dec 13 07:23:20 compute-0 sudo[150682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:20 compute-0 python3.9[150684]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:20 compute-0 systemd[1]: Reloading.
Dec 13 07:23:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:20 compute-0 systemd-rc-local-generator[150704]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:23:20 compute-0 systemd-sysv-generator[150708]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:23:20 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 07:23:20 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 07:23:20 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 07:23:20 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 07:23:20 compute-0 sudo[150682]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:20 compute-0 sudo[150875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsjclpjqgkxqkydyaqfpaoqnfwimfyme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610600.7701774-325-103086631283156/AnsiballZ_file.py'
Dec 13 07:23:20 compute-0 sudo[150875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:21 compute-0 python3.9[150877]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:21 compute-0 sudo[150875]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:21 compute-0 sudo[151027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hffbvozkxglrhaewpvynhsqnjhnoyhcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610601.249325-333-50749771526750/AnsiballZ_stat.py'
Dec 13 07:23:21 compute-0 sudo[151027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:21 compute-0 ceph-mon[74928]: pgmap v397: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:21 compute-0 python3.9[151029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:21 compute-0 sudo[151027]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:21 compute-0 sudo[151150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnbhlsdsgeyeplqcyqgfmbbotwymnust ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610601.249325-333-50749771526750/AnsiballZ_copy.py'
Dec 13 07:23:21 compute-0 sudo[151150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:21 compute-0 python3.9[151152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610601.249325-333-50749771526750/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:22 compute-0 sudo[151150]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:22 compute-0 sudo[151302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfjgdrsapkfejeoegswejvhrsiehyfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610602.2437506-350-225122483282893/AnsiballZ_file.py'
Dec 13 07:23:22 compute-0 sudo[151302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:22 compute-0 python3.9[151304]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:23:22 compute-0 sudo[151302]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:23:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2058 writes, 9119 keys, 2058 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2058 writes, 2058 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2058 writes, 9119 keys, 2058 commit groups, 1.0 writes per commit group, ingest: 12.31 MB, 0.02 MB/s
                                           Interval WAL: 2058 writes, 2058 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    401.7      0.02              0.02         3    0.007       0      0       0.0       0.0
                                             L6      1/0    6.57 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    492.9    428.9      0.03              0.03         2    0.016    7166    731       0.0       0.0
                                            Sum      1/0    6.57 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    296.5    418.1      0.05              0.04         5    0.011    7166    731       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    302.5    425.7      0.05              0.04         4    0.013    7166    731       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    492.9    428.9      0.03              0.03         2    0.016    7166    731       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    420.5      0.02              0.02         2    0.010       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5642ba289a30#2 capacity: 308.00 MB usage: 690.88 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(37,603.69 KB,0.191409%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,58.89 KB,0.0186722%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 07:23:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:22 compute-0 sudo[151454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hveqjspcbvffmxdvleuabiqeynrfmkyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610602.7405534-358-280845959330594/AnsiballZ_stat.py'
Dec 13 07:23:22 compute-0 sudo[151454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:23 compute-0 python3.9[151456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:23:23 compute-0 sudo[151454]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:23 compute-0 sudo[151577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvztbfueghytublwzkerwmiljownttay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610602.7405534-358-280845959330594/AnsiballZ_copy.py'
Dec 13 07:23:23 compute-0 sudo[151577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:23 compute-0 python3.9[151579]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610602.7405534-358-280845959330594/.source.json _original_basename=.ufoz9njg follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:23 compute-0 sudo[151577]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:23 compute-0 ceph-mon[74928]: pgmap v398: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:23 compute-0 sudo[151729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfdfraqfpfaeqsoyfvctynyhwdlwlrwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610603.5612314-373-38720455606060/AnsiballZ_file.py'
Dec 13 07:23:23 compute-0 sudo[151729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:23 compute-0 python3.9[151731]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:23 compute-0 sudo[151729]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:24 compute-0 sudo[151881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkyjwuseiqchtnsnoodicglrrgpnxhkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610604.033614-381-239002924444236/AnsiballZ_stat.py'
Dec 13 07:23:24 compute-0 sudo[151881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:24 compute-0 sudo[151881]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:24 compute-0 sudo[152004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frzjnleosysjgbsyuamdczasbjywjitc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610604.033614-381-239002924444236/AnsiballZ_copy.py'
Dec 13 07:23:24 compute-0 sudo[152004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:24 compute-0 sudo[152004]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:25 compute-0 sudo[152156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhqwgayktcopuwstaqzxqgozanozuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610605.0602906-398-228828269754304/AnsiballZ_container_config_data.py'
Dec 13 07:23:25 compute-0 sudo[152156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:25 compute-0 python3.9[152158]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 13 07:23:25 compute-0 sudo[152156]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:25 compute-0 ceph-mon[74928]: pgmap v399: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:25 compute-0 sudo[152308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvvwjipeozobnvdllaiqawsvkaihyreu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610605.6664388-407-17220389719961/AnsiballZ_container_config_hash.py'
Dec 13 07:23:25 compute-0 sudo[152308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:26 compute-0 python3.9[152310]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 07:23:26 compute-0 sudo[152308]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:26 compute-0 sudo[152460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwlbvpwziiaorfmfnfzbmmlzunufiumw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610606.2850904-416-14256298927930/AnsiballZ_podman_container_info.py'
Dec 13 07:23:26 compute-0 sudo[152460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:26 compute-0 python3.9[152462]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 07:23:26 compute-0 sudo[152460]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:27 compute-0 ceph-mon[74928]: pgmap v400: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:27 compute-0 sudo[152633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afsgidsvootrzmmyehrjjnfbquciemsd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610607.3022268-429-260426777220040/AnsiballZ_edpm_container_manage.py'
Dec 13 07:23:27 compute-0 sudo[152633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:27 compute-0 python3[152635]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 07:23:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:29 compute-0 ceph-mon[74928]: pgmap v401: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:31 compute-0 ceph-mon[74928]: pgmap v402: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:33 compute-0 sudo[152702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:23:33 compute-0 sudo[152702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:33 compute-0 sudo[152702]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:33 compute-0 sudo[152727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:23:33 compute-0 sudo[152727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:33 compute-0 ceph-mon[74928]: pgmap v403: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:35 compute-0 ceph-mon[74928]: pgmap v404: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:36 compute-0 podman[152646]: 2025-12-13 07:23:36.043046088 +0000 UTC m=+8.114068519 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 07:23:36 compute-0 sudo[152727]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:23:36 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:23:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:23:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:23:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:23:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:23:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:23:36 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:23:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:23:36 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:23:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:23:36 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:23:36 compute-0 sudo[152824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:23:36 compute-0 sudo[152824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:36 compute-0 sudo[152824]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:36 compute-0 podman[152823]: 2025-12-13 07:23:36.174246493 +0000 UTC m=+0.035269186 container create 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:23:36 compute-0 podman[152823]: 2025-12-13 07:23:36.158613302 +0000 UTC m=+0.019636014 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 07:23:36 compute-0 python3[152635]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 07:23:36 compute-0 sudo[152857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:23:36 compute-0 sudo[152857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:36 compute-0 sudo[152633]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:36 compute-0 podman[152957]: 2025-12-13 07:23:36.451618472 +0000 UTC m=+0.031459755 container create 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:23:36 compute-0 systemd[1]: Started libpod-conmon-6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf.scope.
Dec 13 07:23:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:23:36 compute-0 podman[152957]: 2025-12-13 07:23:36.505320704 +0000 UTC m=+0.085161997 container init 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:23:36 compute-0 podman[152957]: 2025-12-13 07:23:36.510592394 +0000 UTC m=+0.090433677 container start 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:23:36 compute-0 podman[152957]: 2025-12-13 07:23:36.513017179 +0000 UTC m=+0.092858482 container attach 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:23:36 compute-0 great_mirzakhani[152999]: 167 167
Dec 13 07:23:36 compute-0 systemd[1]: libpod-6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf.scope: Deactivated successfully.
Dec 13 07:23:36 compute-0 podman[152957]: 2025-12-13 07:23:36.519185259 +0000 UTC m=+0.099026542 container died 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 07:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a407f90e8c3fd63a8477ee2e8aeba81a8835e01306fae3d93e9b6bcc1f283ed1-merged.mount: Deactivated successfully.
Dec 13 07:23:36 compute-0 podman[152957]: 2025-12-13 07:23:36.439369854 +0000 UTC m=+0.019211157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:23:36 compute-0 podman[152957]: 2025-12-13 07:23:36.543308262 +0000 UTC m=+0.123149545 container remove 6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 07:23:36 compute-0 systemd[1]: libpod-conmon-6ca649ac93e7c0bdc1ca24e09d251b45cdf080432105b49de02cc6c09a2638cf.scope: Deactivated successfully.
Dec 13 07:23:36 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:23:36 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:23:36 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:23:36 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:23:36 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:23:36 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:23:36 compute-0 sudo[153089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijkmvuhjtmlrcvrhprirpyaqrhcojqok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610616.4096336-437-171384731200867/AnsiballZ_stat.py'
Dec 13 07:23:36 compute-0 sudo[153089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:36 compute-0 podman[153097]: 2025-12-13 07:23:36.67125046 +0000 UTC m=+0.032010588 container create 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:23:36 compute-0 systemd[1]: Started libpod-conmon-5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57.scope.
Dec 13 07:23:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:36 compute-0 podman[153097]: 2025-12-13 07:23:36.729216353 +0000 UTC m=+0.089976511 container init 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:23:36 compute-0 podman[153097]: 2025-12-13 07:23:36.734914272 +0000 UTC m=+0.095674410 container start 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:23:36 compute-0 podman[153097]: 2025-12-13 07:23:36.737925206 +0000 UTC m=+0.098685343 container attach 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:23:36 compute-0 podman[153097]: 2025-12-13 07:23:36.658107898 +0000 UTC m=+0.018868035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:23:36 compute-0 python3.9[153096]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:23:36 compute-0 sudo[153089]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:37 compute-0 agitated_hopper[153110]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:23:37 compute-0 agitated_hopper[153110]: --> All data devices are unavailable
Dec 13 07:23:37 compute-0 systemd[1]: libpod-5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57.scope: Deactivated successfully.
Dec 13 07:23:37 compute-0 podman[153097]: 2025-12-13 07:23:37.146865089 +0000 UTC m=+0.507625236 container died 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc99e508cec3f779af9e1b192067cc645d0ee5b4457f0b03714153a5de02a4a6-merged.mount: Deactivated successfully.
Dec 13 07:23:37 compute-0 podman[153097]: 2025-12-13 07:23:37.170239278 +0000 UTC m=+0.530999414 container remove 5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:23:37 compute-0 systemd[1]: libpod-conmon-5ead1753c8904d8eae493e74b82303095cffcb2dc96c4a0630c360ee11be0f57.scope: Deactivated successfully.
Dec 13 07:23:37 compute-0 sudo[153290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsxnzvejtrbzcalnyptcvepecsykcci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610616.9970798-446-263562129407442/AnsiballZ_file.py'
Dec 13 07:23:37 compute-0 sudo[153290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:37 compute-0 sudo[152857]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:37 compute-0 sudo[153293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:23:37 compute-0 sudo[153293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:37 compute-0 sudo[153293]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:37 compute-0 sudo[153318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:23:37 compute-0 sudo[153318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:37 compute-0 python3.9[153292]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:37 compute-0 sudo[153290]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:37 compute-0 podman[153404]: 2025-12-13 07:23:37.527689916 +0000 UTC m=+0.029872760 container create 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:23:37 compute-0 sudo[153435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deuypwkibkgmenmhxnzrtkqchcczdbnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610616.9970798-446-263562129407442/AnsiballZ_stat.py'
Dec 13 07:23:37 compute-0 sudo[153435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:37 compute-0 systemd[1]: Started libpod-conmon-3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428.scope.
Dec 13 07:23:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:23:37 compute-0 podman[153404]: 2025-12-13 07:23:37.580328734 +0000 UTC m=+0.082511609 container init 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:23:37 compute-0 podman[153404]: 2025-12-13 07:23:37.585491941 +0000 UTC m=+0.087674796 container start 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:23:37 compute-0 podman[153404]: 2025-12-13 07:23:37.58732142 +0000 UTC m=+0.089504275 container attach 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:23:37 compute-0 zen_rubin[153442]: 167 167
Dec 13 07:23:37 compute-0 systemd[1]: libpod-3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428.scope: Deactivated successfully.
Dec 13 07:23:37 compute-0 podman[153404]: 2025-12-13 07:23:37.589854457 +0000 UTC m=+0.092037313 container died 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:23:37 compute-0 ceph-mon[74928]: pgmap v405: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d2d8ece24fee494c43349d6d723082d6de89c4084429be2a4948aa94fb6b22a-merged.mount: Deactivated successfully.
Dec 13 07:23:37 compute-0 podman[153404]: 2025-12-13 07:23:37.515232657 +0000 UTC m=+0.017415532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:23:37 compute-0 podman[153404]: 2025-12-13 07:23:37.613922287 +0000 UTC m=+0.116105142 container remove 3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:23:37 compute-0 systemd[1]: libpod-conmon-3ce948461d85e3c39cf801f8605e850fab1f8915813748ec659bd05b873a2428.scope: Deactivated successfully.
Dec 13 07:23:37 compute-0 python3.9[153439]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:23:37 compute-0 sudo[153435]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:37 compute-0 podman[153464]: 2025-12-13 07:23:37.743493127 +0000 UTC m=+0.032767055 container create bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:23:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:37 compute-0 systemd[1]: Started libpod-conmon-bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2.scope.
Dec 13 07:23:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:37 compute-0 podman[153464]: 2025-12-13 07:23:37.799085812 +0000 UTC m=+0.088359761 container init bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 07:23:37 compute-0 podman[153464]: 2025-12-13 07:23:37.806245853 +0000 UTC m=+0.095519781 container start bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:23:37 compute-0 podman[153464]: 2025-12-13 07:23:37.808734908 +0000 UTC m=+0.098008856 container attach bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:23:37 compute-0 podman[153464]: 2025-12-13 07:23:37.729805042 +0000 UTC m=+0.019078990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:23:38 compute-0 awesome_williams[153500]: {
Dec 13 07:23:38 compute-0 awesome_williams[153500]:     "0": [
Dec 13 07:23:38 compute-0 awesome_williams[153500]:         {
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "devices": [
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "/dev/loop3"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             ],
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_name": "ceph_lv0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_size": "21470642176",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "name": "ceph_lv0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "tags": {
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cluster_name": "ceph",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.crush_device_class": "",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.encrypted": "0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.objectstore": "bluestore",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osd_id": "0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.type": "block",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.vdo": "0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.with_tpm": "0"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             },
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "type": "block",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "vg_name": "ceph_vg0"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:         }
Dec 13 07:23:38 compute-0 awesome_williams[153500]:     ],
Dec 13 07:23:38 compute-0 awesome_williams[153500]:     "1": [
Dec 13 07:23:38 compute-0 awesome_williams[153500]:         {
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "devices": [
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "/dev/loop4"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             ],
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_name": "ceph_lv1",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_size": "21470642176",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "name": "ceph_lv1",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "tags": {
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cluster_name": "ceph",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.crush_device_class": "",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.encrypted": "0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.objectstore": "bluestore",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osd_id": "1",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.type": "block",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.vdo": "0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.with_tpm": "0"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             },
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "type": "block",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "vg_name": "ceph_vg1"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:         }
Dec 13 07:23:38 compute-0 awesome_williams[153500]:     ],
Dec 13 07:23:38 compute-0 awesome_williams[153500]:     "2": [
Dec 13 07:23:38 compute-0 awesome_williams[153500]:         {
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "devices": [
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "/dev/loop5"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             ],
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_name": "ceph_lv2",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_size": "21470642176",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "name": "ceph_lv2",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "tags": {
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.cluster_name": "ceph",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.crush_device_class": "",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.encrypted": "0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.objectstore": "bluestore",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osd_id": "2",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.type": "block",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.vdo": "0",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:                 "ceph.with_tpm": "0"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             },
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "type": "block",
Dec 13 07:23:38 compute-0 awesome_williams[153500]:             "vg_name": "ceph_vg2"
Dec 13 07:23:38 compute-0 awesome_williams[153500]:         }
Dec 13 07:23:38 compute-0 awesome_williams[153500]:     ]
Dec 13 07:23:38 compute-0 awesome_williams[153500]: }
Dec 13 07:23:38 compute-0 systemd[1]: libpod-bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2.scope: Deactivated successfully.
Dec 13 07:23:38 compute-0 podman[153464]: 2025-12-13 07:23:38.070784308 +0000 UTC m=+0.360058246 container died bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f78ba06436d8488e9b400dcf0c5d6ad976f2a88294b7ea1aa42acd73c015653-merged.mount: Deactivated successfully.
Dec 13 07:23:38 compute-0 podman[153464]: 2025-12-13 07:23:38.094159559 +0000 UTC m=+0.383433488 container remove bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_williams, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:23:38 compute-0 systemd[1]: libpod-conmon-bcce2f4996edd48540019d060d5ec3afc9db84005d1a4dcf587db1ef5423baa2.scope: Deactivated successfully.
Dec 13 07:23:38 compute-0 sudo[153318]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:38 compute-0 sudo[153595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:23:38 compute-0 sudo[153595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:38 compute-0 sudo[153595]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:38 compute-0 sudo[153644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:23:38 compute-0 sudo[153644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:23:38
Dec 13 07:23:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:23:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:23:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control']
Dec 13 07:23:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:23:38 compute-0 sudo[153695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stfyaxlpnyxlgsmngopumahcchryudfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610617.773155-446-281226908936130/AnsiballZ_copy.py'
Dec 13 07:23:38 compute-0 sudo[153695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:38 compute-0 python3.9[153697]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610617.773155-446-281226908936130/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:38 compute-0 sudo[153695]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:38 compute-0 podman[153708]: 2025-12-13 07:23:38.439539833 +0000 UTC m=+0.030321932 container create 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:23:38 compute-0 systemd[1]: Started libpod-conmon-9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69.scope.
Dec 13 07:23:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:23:38 compute-0 podman[153708]: 2025-12-13 07:23:38.488074972 +0000 UTC m=+0.078857081 container init 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:23:38 compute-0 podman[153708]: 2025-12-13 07:23:38.493317727 +0000 UTC m=+0.084099817 container start 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:23:38 compute-0 podman[153708]: 2025-12-13 07:23:38.494513389 +0000 UTC m=+0.085295478 container attach 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 07:23:38 compute-0 elegant_gauss[153741]: 167 167
Dec 13 07:23:38 compute-0 systemd[1]: libpod-9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69.scope: Deactivated successfully.
Dec 13 07:23:38 compute-0 podman[153708]: 2025-12-13 07:23:38.497264805 +0000 UTC m=+0.088046895 container died 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 07:23:38 compute-0 podman[153708]: 2025-12-13 07:23:38.514907723 +0000 UTC m=+0.105689812 container remove 9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gauss, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:23:38 compute-0 podman[153708]: 2025-12-13 07:23:38.428360069 +0000 UTC m=+0.019142178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9089c5be8189707c49577812b72d6f731277245db29a8db619f64417fd0336c8-merged.mount: Deactivated successfully.
Dec 13 07:23:38 compute-0 systemd[1]: libpod-conmon-9279eaea325b9c0f6d230166641b85bbb16210ec883f5052e9737af03579af69.scope: Deactivated successfully.
Dec 13 07:23:38 compute-0 sudo[153814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fibibmrnmimrelosplkrgcohcpzznsqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610617.773155-446-281226908936130/AnsiballZ_systemd.py'
Dec 13 07:23:38 compute-0 sudo[153814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:38 compute-0 podman[153820]: 2025-12-13 07:23:38.644281384 +0000 UTC m=+0.028695773 container create 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:23:38 compute-0 systemd[1]: Started libpod-conmon-664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651.scope.
Dec 13 07:23:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:38 compute-0 podman[153820]: 2025-12-13 07:23:38.713262833 +0000 UTC m=+0.097677234 container init 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Dec 13 07:23:38 compute-0 podman[153820]: 2025-12-13 07:23:38.717756215 +0000 UTC m=+0.102170604 container start 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:23:38 compute-0 podman[153820]: 2025-12-13 07:23:38.719474264 +0000 UTC m=+0.103888655 container attach 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 07:23:38 compute-0 podman[153820]: 2025-12-13 07:23:38.633070732 +0000 UTC m=+0.017485142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:23:38 compute-0 python3.9[153822]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 07:23:38 compute-0 systemd[1]: Reloading.
Dec 13 07:23:38 compute-0 systemd-rc-local-generator[153874]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:23:38 compute-0 systemd-sysv-generator[153877]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:23:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:23:39 compute-0 sudo[153814]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:39 compute-0 lvm[153995]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:23:39 compute-0 lvm[153996]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:23:39 compute-0 lvm[153993]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:23:39 compute-0 lvm[153996]: VG ceph_vg2 finished
Dec 13 07:23:39 compute-0 lvm[153995]: VG ceph_vg1 finished
Dec 13 07:23:39 compute-0 lvm[153993]: VG ceph_vg0 finished
Dec 13 07:23:39 compute-0 nice_chatelet[153834]: {}
Dec 13 07:23:39 compute-0 systemd[1]: libpod-664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651.scope: Deactivated successfully.
Dec 13 07:23:39 compute-0 podman[153820]: 2025-12-13 07:23:39.308877162 +0000 UTC m=+0.693291562 container died 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 07:23:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b647745b8acc763a82680e4b4bd27ab0f1fa67855cefd712ac5adee41617ce2-merged.mount: Deactivated successfully.
Dec 13 07:23:39 compute-0 sudo[154028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngvhbrkgdvtylzzoejsonqdynzlsekgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610617.773155-446-281226908936130/AnsiballZ_systemd.py'
Dec 13 07:23:39 compute-0 sudo[154028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:39 compute-0 podman[153820]: 2025-12-13 07:23:39.334346348 +0000 UTC m=+0.718760738 container remove 664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:23:39 compute-0 systemd[1]: libpod-conmon-664f92f46d50855f365cf5239ac99d63653c340284dbd79bf6456771b39a3651.scope: Deactivated successfully.
Dec 13 07:23:39 compute-0 sudo[153644]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:23:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:23:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:23:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:23:39 compute-0 sudo[154039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:23:39 compute-0 sudo[154039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:23:39 compute-0 sudo[154039]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:39 compute-0 python3.9[154036]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:39 compute-0 ceph-mon[74928]: pgmap v406: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:39 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:23:39 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:23:39 compute-0 systemd[1]: Reloading.
Dec 13 07:23:39 compute-0 systemd-sysv-generator[154090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:23:39 compute-0 systemd-rc-local-generator[154086]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:23:39 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 13 07:23:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ac2e3cc0f49fbd08a64ac89f3699fdf738171896df38043320a4a42d495566/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ac2e3cc0f49fbd08a64ac89f3699fdf738171896df38043320a4a42d495566/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 07:23:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07.
Dec 13 07:23:39 compute-0 podman[154103]: 2025-12-13 07:23:39.966152081 +0000 UTC m=+0.082438884 container init 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 07:23:39 compute-0 ovn_metadata_agent[154116]: + sudo -E kolla_set_configs
Dec 13 07:23:39 compute-0 podman[154103]: 2025-12-13 07:23:39.993206648 +0000 UTC m=+0.109493452 container start 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 07:23:39 compute-0 edpm-start-podman-container[154103]: ovn_metadata_agent
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Validating config file
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Copying service configuration files
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Writing out command to execute
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 13 07:23:40 compute-0 podman[154122]: 2025-12-13 07:23:40.046089895 +0000 UTC m=+0.045306809 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 07:23:40 compute-0 edpm-start-podman-container[154102]: Creating additional drop-in dependency for "ovn_metadata_agent" (1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07)
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: ++ cat /run_command
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: + CMD=neutron-ovn-metadata-agent
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: + ARGS=
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: + sudo kolla_copy_cacerts
Dec 13 07:23:40 compute-0 systemd[1]: Reloading.
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: Running command: 'neutron-ovn-metadata-agent'
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: + [[ ! -n '' ]]
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: + . kolla_extend_start
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: + umask 0022
Dec 13 07:23:40 compute-0 ovn_metadata_agent[154116]: + exec neutron-ovn-metadata-agent
Dec 13 07:23:40 compute-0 systemd-rc-local-generator[154181]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:23:40 compute-0 systemd-sysv-generator[154184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:23:40 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 13 07:23:40 compute-0 sudo[154028]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:40 compute-0 sshd-session[145245]: Connection closed by 192.168.122.30 port 38534
Dec 13 07:23:40 compute-0 sshd-session[145242]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:23:40 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 13 07:23:40 compute-0 systemd[1]: session-50.scope: Consumed 41.260s CPU time.
Dec 13 07:23:40 compute-0 systemd-logind[745]: Session 50 logged out. Waiting for processes to exit.
Dec 13 07:23:40 compute-0 systemd-logind[745]: Removed session 50.
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.600 154121 INFO neutron.common.config [-] Logging enabled!
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.600 154121 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.600 154121 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.601 154121 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.602 154121 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.603 154121 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.604 154121 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.605 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.606 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.607 154121 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.608 154121 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.609 154121 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.610 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.611 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.612 154121 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ceph-mon[74928]: pgmap v407: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.613 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.614 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.615 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.616 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.617 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.618 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.619 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.620 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.621 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.622 154121 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.623 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.624 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.625 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.626 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.627 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.628 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.629 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.630 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.631 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.632 154121 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.639 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.639 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.639 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.639 154121 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.640 154121 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.650 154121 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 075cc82e-193d-47f2-a248-9917472f5475 (UUID: 075cc82e-193d-47f2-a248-9917472f5475) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.668 154121 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.668 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.668 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.668 154121 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.670 154121 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.676 154121 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.679 154121 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '075cc82e-193d-47f2-a248-9917472f5475'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f240d193b80>], external_ids={}, name=075cc82e-193d-47f2-a248-9917472f5475, nb_cfg_timestamp=1765610577914, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.680 154121 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f240d116fd0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.681 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.681 154121 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.681 154121 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.681 154121 INFO oslo_service.service [-] Starting 1 workers
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.685 154121 DEBUG oslo_service.service [-] Started child 154224 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.687 154121 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpj55srbhp/privsep.sock']
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.688 154224 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-170709'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.704 154224 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.704 154224 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.704 154224 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.706 154224 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.712 154224 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 07:23:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:41.715 154224 INFO eventlet.wsgi.server [-] (154224) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 13 07:23:42 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.211 154121 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.212 154121 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpj55srbhp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.133 154229 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.136 154229 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.138 154229 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.138 154229 INFO oslo.privsep.daemon [-] privsep daemon running as pid 154229
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.215 154229 DEBUG oslo.privsep.daemon [-] privsep: reply[732fe205-1cdd-4b5d-b004-6269471f91be]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 07:23:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.633 154229 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.633 154229 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:23:42 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:42.633 154229 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:23:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.083 154229 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e69915-e524-495e-8d2f-7aec7dd240f0]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.085 154121 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=075cc82e-193d-47f2-a248-9917472f5475, column=external_ids, values=({'neutron:ovn-metadata-id': 'ab55531c-472b-5a5c-8fef-f07849a1dd3d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.092 154121 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=075cc82e-193d-47f2-a248-9917472f5475, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.098 154121 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.099 154121 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.100 154121 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.101 154121 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.102 154121 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.103 154121 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.104 154121 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.105 154121 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.106 154121 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.107 154121 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.108 154121 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.109 154121 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.110 154121 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.111 154121 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.112 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.113 154121 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.114 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.115 154121 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.116 154121 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.117 154121 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.118 154121 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.119 154121 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.120 154121 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.121 154121 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.122 154121 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.123 154121 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.124 154121 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.125 154121 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.126 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.127 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.128 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:23:43 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:23:43.129 154121 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 13 07:23:43 compute-0 ceph-mon[74928]: pgmap v408: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.622424) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624622488, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 867, "num_deletes": 251, "total_data_size": 1215543, "memory_usage": 1240832, "flush_reason": "Manual Compaction"}
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624628092, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1193835, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8823, "largest_seqno": 9689, "table_properties": {"data_size": 1189495, "index_size": 1992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9145, "raw_average_key_size": 18, "raw_value_size": 1180827, "raw_average_value_size": 2414, "num_data_blocks": 93, "num_entries": 489, "num_filter_entries": 489, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610548, "oldest_key_time": 1765610548, "file_creation_time": 1765610624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5661 microseconds, and 4526 cpu microseconds.
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628121) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1193835 bytes OK
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628133) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628478) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628488) EVENT_LOG_v1 {"time_micros": 1765610624628486, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628501) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1211287, prev total WAL file size 1211287, number of live WAL files 2.
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628933) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1165KB)], [23(6730KB)]
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624628986, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8086001, "oldest_snapshot_seqno": -1}
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3292 keys, 6300978 bytes, temperature: kUnknown
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624643865, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6300978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6277029, "index_size": 14624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79784, "raw_average_key_size": 24, "raw_value_size": 6215505, "raw_average_value_size": 1888, "num_data_blocks": 639, "num_entries": 3292, "num_filter_entries": 3292, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765610624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.644005) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6300978 bytes
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.644374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 541.6 rd, 422.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.6 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(12.1) write-amplify(5.3) OK, records in: 3806, records dropped: 514 output_compression: NoCompression
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.644386) EVENT_LOG_v1 {"time_micros": 1765610624644381, "job": 8, "event": "compaction_finished", "compaction_time_micros": 14930, "compaction_time_cpu_micros": 11098, "output_level": 6, "num_output_files": 1, "total_output_size": 6300978, "num_input_records": 3806, "num_output_records": 3292, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624644726, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610624645771, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.628832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:23:44 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:23:44.645823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:23:45 compute-0 ceph-mon[74928]: pgmap v409: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:45 compute-0 sshd-session[154234]: Accepted publickey for zuul from 192.168.122.30 port 39060 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:23:45 compute-0 systemd-logind[745]: New session 51 of user zuul.
Dec 13 07:23:45 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec 13 07:23:45 compute-0 sshd-session[154234]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:23:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:46 compute-0 python3.9[154387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:23:47 compute-0 sudo[154541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wygfhdhwfqprvwknidcupaubehpiyjkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610626.9914746-34-198335104147986/AnsiballZ_command.py'
Dec 13 07:23:47 compute-0 sudo[154541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:47 compute-0 python3.9[154543]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:23:47 compute-0 sudo[154541]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:47 compute-0 ceph-mon[74928]: pgmap v410: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:48 compute-0 sudo[154703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbilfaqtyzojtjvdglhxpzicxnbmjhmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610627.759339-45-241516534855302/AnsiballZ_systemd_service.py'
Dec 13 07:23:48 compute-0 sudo[154703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:23:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:23:48 compute-0 python3.9[154705]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 07:23:48 compute-0 systemd[1]: Reloading.
Dec 13 07:23:48 compute-0 systemd-sysv-generator[154733]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:23:48 compute-0 systemd-rc-local-generator[154730]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:23:48 compute-0 sudo[154703]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:49 compute-0 python3.9[154890]: ansible-ansible.builtin.service_facts Invoked
Dec 13 07:23:49 compute-0 network[154907]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:23:49 compute-0 network[154908]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:23:49 compute-0 network[154909]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:23:49 compute-0 ceph-mon[74928]: pgmap v411: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:49 compute-0 podman[154915]: 2025-12-13 07:23:49.979052092 +0000 UTC m=+0.069482720 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:23:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:51 compute-0 sudo[155192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddowmsrgpefnhkelucnkpcbzvkpocegf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610631.4439745-64-173376632809824/AnsiballZ_systemd_service.py'
Dec 13 07:23:51 compute-0 ceph-mon[74928]: pgmap v412: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:51 compute-0 sudo[155192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:51 compute-0 python3.9[155194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:51 compute-0 sudo[155192]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:52 compute-0 sudo[155345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtpxhitjqxgnocbqucylklgrfbgrkteo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610631.992559-64-226923908551069/AnsiballZ_systemd_service.py'
Dec 13 07:23:52 compute-0 sudo[155345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:52 compute-0 python3.9[155347]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:52 compute-0 sudo[155345]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:52 compute-0 sudo[155498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwxvuwdetdftfizztaivhxnzqbuojfwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610632.5199344-64-226800065260824/AnsiballZ_systemd_service.py'
Dec 13 07:23:52 compute-0 sudo[155498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:52 compute-0 python3.9[155500]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:52 compute-0 sudo[155498]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:53 compute-0 sudo[155651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtykswhsftaylhztysibwwrykxhwuksy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610633.0479317-64-118863040511902/AnsiballZ_systemd_service.py'
Dec 13 07:23:53 compute-0 sudo[155651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:53 compute-0 python3.9[155653]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:53 compute-0 sudo[155651]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:53 compute-0 ceph-mon[74928]: pgmap v413: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:53 compute-0 sudo[155804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdktqbntuvdlxzhzostskhlfpwiikdas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610633.574592-64-155303163666305/AnsiballZ_systemd_service.py'
Dec 13 07:23:53 compute-0 sudo[155804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:54 compute-0 python3.9[155806]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:54 compute-0 sudo[155804]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:54 compute-0 sudo[155957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwzwhtcneklygezselosajlcsrxkcwjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610634.1290586-64-276348567401393/AnsiballZ_systemd_service.py'
Dec 13 07:23:54 compute-0 sudo[155957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:54 compute-0 python3.9[155959]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:54 compute-0 sudo[155957]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:54 compute-0 sudo[156110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wopwnpfkeugiplafryzwrooyleqozonz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610634.6889632-64-127953788374296/AnsiballZ_systemd_service.py'
Dec 13 07:23:54 compute-0 sudo[156110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:55 compute-0 python3.9[156112]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:23:55 compute-0 sudo[156110]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:55 compute-0 ceph-mon[74928]: pgmap v414: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:55 compute-0 sudo[156263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwbtogjpqlxzvfomhfdlhxrlwokvxfym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610635.385953-116-251349237417760/AnsiballZ_file.py'
Dec 13 07:23:55 compute-0 sudo[156263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:55 compute-0 python3.9[156265]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:55 compute-0 sudo[156263]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:56 compute-0 sudo[156415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-notfanodwebvpdmpgolqptgczfmvrpfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610636.0458755-116-116391232613122/AnsiballZ_file.py'
Dec 13 07:23:56 compute-0 sudo[156415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:56 compute-0 python3.9[156417]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:56 compute-0 sudo[156415]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:56 compute-0 sudo[156567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mirxpsnaipcdthnepydzxtzobtojswqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610636.4860678-116-89110969580704/AnsiballZ_file.py'
Dec 13 07:23:56 compute-0 sudo[156567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:56 compute-0 python3.9[156569]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:56 compute-0 sudo[156567]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:57 compute-0 sudo[156719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqqjofxfqovikwlwbnhindxyvgbmgwar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610636.9260175-116-51590006732955/AnsiballZ_file.py'
Dec 13 07:23:57 compute-0 sudo[156719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:57 compute-0 python3.9[156721]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:57 compute-0 sudo[156719]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:57 compute-0 sudo[156871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxcbishyomsajysvtphdosprxfwttpgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610637.3734722-116-249703192841585/AnsiballZ_file.py'
Dec 13 07:23:57 compute-0 sudo[156871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:57 compute-0 ceph-mon[74928]: pgmap v415: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:57 compute-0 python3.9[156873]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:57 compute-0 sudo[156871]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:23:57 compute-0 sudo[157023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowvetgzlwrcgzgmjoxszixoewvhzpme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610637.8116734-116-17530042798340/AnsiballZ_file.py'
Dec 13 07:23:57 compute-0 sudo[157023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:58 compute-0 python3.9[157025]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:58 compute-0 sudo[157023]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:58 compute-0 sudo[157175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asoeucuywtkvqhaekgaluabgiylziuqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610638.2503593-116-111492055463131/AnsiballZ_file.py'
Dec 13 07:23:58 compute-0 sudo[157175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:58 compute-0 python3.9[157177]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:58 compute-0 sudo[157175]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:58 compute-0 sudo[157327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobnqpvywndazogplrgbckvysujjhrco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610638.7139897-166-275348576973426/AnsiballZ_file.py'
Dec 13 07:23:58 compute-0 sudo[157327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:59 compute-0 python3.9[157329]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:59 compute-0 sudo[157327]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:59 compute-0 sudo[157479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xixmtaixatcvotnnvzicdqywrhbwoosn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610639.1455286-166-78015613743877/AnsiballZ_file.py'
Dec 13 07:23:59 compute-0 sudo[157479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:59 compute-0 python3.9[157481]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:59 compute-0 sudo[157479]: pam_unix(sudo:session): session closed for user root
Dec 13 07:23:59 compute-0 ceph-mon[74928]: pgmap v416: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:23:59 compute-0 sudo[157631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwtophjlatqzakkdgcrutwhsgiohuhts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610639.5819764-166-245499080824672/AnsiballZ_file.py'
Dec 13 07:23:59 compute-0 sudo[157631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:23:59 compute-0 python3.9[157633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:23:59 compute-0 sudo[157631]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:00 compute-0 sudo[157783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gopvhlnzfvjwusfeqeropkbufahwjhyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610640.099342-166-56196866007724/AnsiballZ_file.py'
Dec 13 07:24:00 compute-0 sudo[157783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:00 compute-0 python3.9[157785]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:24:00 compute-0 sudo[157783]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:00 compute-0 sudo[157935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpoygckbonkpvlhgxrwrhrzbbywgabbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610640.5248182-166-23110430626219/AnsiballZ_file.py'
Dec 13 07:24:00 compute-0 sudo[157935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:00 compute-0 python3.9[157937]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:24:00 compute-0 sudo[157935]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:01 compute-0 sudo[158087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkpswdvwwvyshkfdxpvmzzbtnlaahlbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610640.9601846-166-46748387568366/AnsiballZ_file.py'
Dec 13 07:24:01 compute-0 sudo[158087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:01 compute-0 python3.9[158089]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:24:01 compute-0 sudo[158087]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:01 compute-0 sudo[158239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idfzjngrbargozoqwkrcepkjclwpwfji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610641.3899994-166-25669136476756/AnsiballZ_file.py'
Dec 13 07:24:01 compute-0 sudo[158239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:01 compute-0 ceph-mon[74928]: pgmap v417: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:01 compute-0 python3.9[158241]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:24:01 compute-0 sudo[158239]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:02 compute-0 sudo[158391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhiwqjmepwbonzhkltgorrwcaezuyois ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610641.9217725-217-186233123127123/AnsiballZ_command.py'
Dec 13 07:24:02 compute-0 sudo[158391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:02 compute-0 python3.9[158393]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:24:02 compute-0 sudo[158391]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:02 compute-0 python3.9[158545]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 07:24:03 compute-0 sudo[158695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iomqkslozdfauauhqdbkcibdxoxxixvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610642.9977381-235-157152370567886/AnsiballZ_systemd_service.py'
Dec 13 07:24:03 compute-0 sudo[158695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:03 compute-0 python3.9[158697]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 07:24:03 compute-0 systemd[1]: Reloading.
Dec 13 07:24:03 compute-0 systemd-rc-local-generator[158718]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:24:03 compute-0 systemd-sysv-generator[158724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:24:03 compute-0 ceph-mon[74928]: pgmap v418: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:03 compute-0 sudo[158695]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:03 compute-0 sudo[158882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yddpqozhvjhcbwehquvrmhbodgshfvnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610643.7924974-243-55445229388192/AnsiballZ_command.py'
Dec 13 07:24:03 compute-0 sudo[158882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:04 compute-0 python3.9[158884]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:24:04 compute-0 sudo[158882]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:04 compute-0 sudo[159035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjdhpqsyobqpmwzkcvfiyjorxacrqhfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610644.2492557-243-196112220945240/AnsiballZ_command.py'
Dec 13 07:24:04 compute-0 sudo[159035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:04 compute-0 python3.9[159037]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:24:04 compute-0 sudo[159035]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:05 compute-0 sudo[159188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjxwpcmcvgbokhugvsirfhgztodlrrdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610644.8270845-243-183225749735505/AnsiballZ_command.py'
Dec 13 07:24:05 compute-0 sudo[159188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:05 compute-0 python3.9[159190]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:24:05 compute-0 sudo[159188]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:05 compute-0 sudo[159341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdyuhnqbdkyjvwvsmnrldglzdttfhirm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610645.25857-243-149193266685952/AnsiballZ_command.py'
Dec 13 07:24:05 compute-0 sudo[159341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:05 compute-0 python3.9[159343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:24:05 compute-0 sudo[159341]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:05 compute-0 ceph-mon[74928]: pgmap v419: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:05 compute-0 sudo[159494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmdkkugpkskgmimjbhggppynpydjbah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610645.6891706-243-226967545705173/AnsiballZ_command.py'
Dec 13 07:24:05 compute-0 sudo[159494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:06 compute-0 python3.9[159496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:24:06 compute-0 sudo[159494]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:06 compute-0 sudo[159647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmbgzurbthhlltvtfjjffujnmagdjbmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610646.1078076-243-59846318010127/AnsiballZ_command.py'
Dec 13 07:24:06 compute-0 sudo[159647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:06 compute-0 python3.9[159649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:24:06 compute-0 sudo[159647]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:06 compute-0 sudo[159800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vutrisjwwozurnhdagmaxsryhxyblreg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610646.5396287-243-79620878202250/AnsiballZ_command.py'
Dec 13 07:24:06 compute-0 sudo[159800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:07 compute-0 python3.9[159802]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:24:07 compute-0 sudo[159800]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:07 compute-0 ceph-mon[74928]: pgmap v420: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:07 compute-0 sudo[159953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzzzbgvxqsncxpkfwwolcgqdvtrnvgjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610647.6140323-297-215471606796060/AnsiballZ_getent.py'
Dec 13 07:24:07 compute-0 sudo[159953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:08 compute-0 python3.9[159955]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 13 07:24:08 compute-0 sudo[159953]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:08 compute-0 sudo[160106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meqcsnpyyyedtvnncsirbvchutbypkzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610648.198999-305-143329470190438/AnsiballZ_group.py'
Dec 13 07:24:08 compute-0 sudo[160106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:08 compute-0 python3.9[160108]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 07:24:08 compute-0 groupadd[160109]: group added to /etc/group: name=libvirt, GID=42473
Dec 13 07:24:08 compute-0 groupadd[160109]: group added to /etc/gshadow: name=libvirt
Dec 13 07:24:08 compute-0 groupadd[160109]: new group: name=libvirt, GID=42473
Dec 13 07:24:08 compute-0 ceph-mon[74928]: pgmap v421: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:08 compute-0 sudo[160106]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:24:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:24:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:24:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:24:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:24:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:24:09 compute-0 sudo[160264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwoxgckpeakfbbwusrzyabpacklufita ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610648.8167963-313-81040112417213/AnsiballZ_user.py'
Dec 13 07:24:09 compute-0 sudo[160264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:09 compute-0 python3.9[160266]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 07:24:09 compute-0 useradd[160268]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 13 07:24:09 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:24:09 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:24:09 compute-0 sudo[160264]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:09 compute-0 sudo[160425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krocekhaleisswxwcguswroyvsmcgwkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610649.6845067-324-143476338769370/AnsiballZ_setup.py'
Dec 13 07:24:09 compute-0 sudo[160425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:10 compute-0 python3.9[160427]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:24:10 compute-0 sudo[160425]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:10 compute-0 podman[160436]: 2025-12-13 07:24:10.386142908 +0000 UTC m=+0.042638768 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 07:24:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:10 compute-0 sudo[160525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwlviwyeyokiipgschrziqasoabchztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610649.6845067-324-143476338769370/AnsiballZ_dnf.py'
Dec 13 07:24:10 compute-0 sudo[160525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:24:10 compute-0 python3.9[160527]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:24:11 compute-0 ceph-mon[74928]: pgmap v422: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:24:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5491 writes, 855 syncs, 6.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s
                                           Interval WAL: 5491 writes, 855 syncs, 6.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:24:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:13 compute-0 ceph-mon[74928]: pgmap v423: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:15 compute-0 ceph-mon[74928]: pgmap v424: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:24:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6995 writes, 28K keys, 6995 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6995 writes, 1406 syncs, 4.98 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6995 writes, 28K keys, 6995 commit groups, 1.0 writes per commit group, ingest: 19.58 MB, 0.03 MB/s
                                           Interval WAL: 6995 writes, 1406 syncs, 4.98 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:24:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:17 compute-0 ceph-mon[74928]: pgmap v425: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:24:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5589 writes, 24K keys, 5589 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5589 writes, 841 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5586 writes, 24K keys, 5586 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s
                                           Interval WAL: 5587 writes, 841 syncs, 6.64 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:24:19 compute-0 ceph-mon[74928]: pgmap v426: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:20 compute-0 podman[160538]: 2025-12-13 07:24:20.719800106 +0000 UTC m=+0.057952317 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 07:24:21 compute-0 ceph-mon[74928]: pgmap v427: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:23 compute-0 ceph-mgr[75200]: [devicehealth INFO root] Check health
Dec 13 07:24:23 compute-0 ceph-mon[74928]: pgmap v428: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:25 compute-0 ceph-mon[74928]: pgmap v429: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:27 compute-0 ceph-mon[74928]: pgmap v430: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:29 compute-0 ceph-mon[74928]: pgmap v431: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:31 compute-0 ceph-mon[74928]: pgmap v432: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:33 compute-0 ceph-mon[74928]: pgmap v433: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:35 compute-0 ceph-mon[74928]: pgmap v434: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:37 compute-0 ceph-mon[74928]: pgmap v435: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:24:38
Dec 13 07:24:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:24:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:24:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.log', 'images']
Dec 13 07:24:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:24:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:24:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:24:39 compute-0 sudo[160567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:24:39 compute-0 sudo[160567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:39 compute-0 sudo[160567]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:39 compute-0 ceph-mon[74928]: pgmap v436: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:39 compute-0 sudo[160592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 07:24:39 compute-0 sudo[160592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:39 compute-0 sudo[160592]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:24:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:24:39 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:39 compute-0 sudo[160635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:24:39 compute-0 sudo[160635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:39 compute-0 sudo[160635]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:39 compute-0 sudo[160660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:24:39 compute-0 sudo[160660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:40 compute-0 sudo[160660]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 07:24:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:24:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:24:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:24:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:24:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:24:40 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:24:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:24:40 compute-0 sudo[160724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:24:40 compute-0 sudo[160724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:40 compute-0 sudo[160724]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:40 compute-0 sudo[160751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:24:40 compute-0 sudo[160751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:40 compute-0 podman[160798]: 2025-12-13 07:24:40.549033428 +0000 UTC m=+0.028732746 container create ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:40 compute-0 systemd[1]: Started libpod-conmon-ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978.scope.
Dec 13 07:24:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:24:40 compute-0 podman[160798]: 2025-12-13 07:24:40.60152807 +0000 UTC m=+0.081227398 container init ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:24:40 compute-0 podman[160798]: 2025-12-13 07:24:40.606863003 +0000 UTC m=+0.086562331 container start ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:24:40 compute-0 podman[160798]: 2025-12-13 07:24:40.610097415 +0000 UTC m=+0.089796754 container attach ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:40 compute-0 recursing_sinoussi[160815]: 167 167
Dec 13 07:24:40 compute-0 systemd[1]: libpod-ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978.scope: Deactivated successfully.
Dec 13 07:24:40 compute-0 conmon[160815]: conmon ccf9988921e2d101b3c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978.scope/container/memory.events
Dec 13 07:24:40 compute-0 podman[160798]: 2025-12-13 07:24:40.612653412 +0000 UTC m=+0.092352730 container died ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-73d3fbd7640254a7d12e1166ddaa79199f53e4979310478794f2e82f936ccebe-merged.mount: Deactivated successfully.
Dec 13 07:24:40 compute-0 podman[160798]: 2025-12-13 07:24:40.536929645 +0000 UTC m=+0.016628983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:24:40 compute-0 podman[160798]: 2025-12-13 07:24:40.639611921 +0000 UTC m=+0.119311239 container remove ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Dec 13 07:24:40 compute-0 podman[160811]: 2025-12-13 07:24:40.644983613 +0000 UTC m=+0.072304217 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 13 07:24:40 compute-0 systemd[1]: libpod-conmon-ccf9988921e2d101b3c05eea92e4f4316b35b96565262dee32389ca23c134978.scope: Deactivated successfully.
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:24:40 compute-0 ceph-mon[74928]: pgmap v437: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:40 compute-0 podman[160861]: 2025-12-13 07:24:40.759985237 +0000 UTC m=+0.029221746 container create d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 07:24:40 compute-0 systemd[1]: Started libpod-conmon-d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff.scope.
Dec 13 07:24:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:40 compute-0 podman[160861]: 2025-12-13 07:24:40.823989985 +0000 UTC m=+0.093226494 container init d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:24:40 compute-0 podman[160861]: 2025-12-13 07:24:40.829104253 +0000 UTC m=+0.098340752 container start d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:24:40 compute-0 podman[160861]: 2025-12-13 07:24:40.830877878 +0000 UTC m=+0.100114378 container attach d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:24:40 compute-0 podman[160861]: 2025-12-13 07:24:40.747613269 +0000 UTC m=+0.016849789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:24:41 compute-0 hungry_merkle[160878]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:24:41 compute-0 hungry_merkle[160878]: --> All data devices are unavailable
Dec 13 07:24:41 compute-0 systemd[1]: libpod-d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff.scope: Deactivated successfully.
Dec 13 07:24:41 compute-0 conmon[160878]: conmon d1f8b2efe5636439f3a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff.scope/container/memory.events
Dec 13 07:24:41 compute-0 podman[160861]: 2025-12-13 07:24:41.213675633 +0000 UTC m=+0.482912153 container died d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 07:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-68ab03953ee4e6fc0b4e25097479849fc22e35f072aa772767e152b2518b9632-merged.mount: Deactivated successfully.
Dec 13 07:24:41 compute-0 podman[160861]: 2025-12-13 07:24:41.236980882 +0000 UTC m=+0.506217381 container remove d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:41 compute-0 systemd[1]: libpod-conmon-d1f8b2efe5636439f3a8ae392045e224e5d8febe9a93db5b505ac9fc2e6551ff.scope: Deactivated successfully.
Dec 13 07:24:41 compute-0 sudo[160751]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:41 compute-0 sudo[160932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:24:41 compute-0 sudo[160932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:41 compute-0 sudo[160932]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:41 compute-0 sudo[160959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:24:41 compute-0 sudo[160959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:41 compute-0 podman[161008]: 2025-12-13 07:24:41.565030656 +0000 UTC m=+0.025103462 container create 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:24:41 compute-0 systemd[1]: Started libpod-conmon-10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51.scope.
Dec 13 07:24:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:24:41 compute-0 podman[161008]: 2025-12-13 07:24:41.609386032 +0000 UTC m=+0.069458857 container init 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 07:24:41 compute-0 podman[161008]: 2025-12-13 07:24:41.613976484 +0000 UTC m=+0.074049289 container start 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:24:41 compute-0 podman[161008]: 2025-12-13 07:24:41.615113943 +0000 UTC m=+0.075186779 container attach 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:41 compute-0 laughing_dijkstra[161024]: 167 167
Dec 13 07:24:41 compute-0 systemd[1]: libpod-10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51.scope: Deactivated successfully.
Dec 13 07:24:41 compute-0 podman[161008]: 2025-12-13 07:24:41.617177695 +0000 UTC m=+0.077250500 container died 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1939bec3fa9b04377de7f827a95ce4d85c977016c3aa6b180645a03e4f7b1f43-merged.mount: Deactivated successfully.
Dec 13 07:24:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:24:41.634 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:24:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:24:41.634 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:24:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:24:41.634 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:24:41 compute-0 podman[161008]: 2025-12-13 07:24:41.636803773 +0000 UTC m=+0.096876579 container remove 10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec 13 07:24:41 compute-0 podman[161008]: 2025-12-13 07:24:41.554974345 +0000 UTC m=+0.015047170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:24:41 compute-0 systemd[1]: libpod-conmon-10d1dfdfcd37c053ce007c50265acd41851c0b69a0c404a6fb26b4e9443d4d51.scope: Deactivated successfully.
Dec 13 07:24:41 compute-0 podman[161054]: 2025-12-13 07:24:41.762204654 +0000 UTC m=+0.029868392 container create 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:24:41 compute-0 systemd[1]: Started libpod-conmon-7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e.scope.
Dec 13 07:24:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:41 compute-0 podman[161054]: 2025-12-13 07:24:41.832109268 +0000 UTC m=+0.099773016 container init 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:24:41 compute-0 podman[161054]: 2025-12-13 07:24:41.837322101 +0000 UTC m=+0.104985839 container start 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:24:41 compute-0 podman[161054]: 2025-12-13 07:24:41.841183773 +0000 UTC m=+0.108847532 container attach 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:24:41 compute-0 podman[161054]: 2025-12-13 07:24:41.750819433 +0000 UTC m=+0.018483181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]: {
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:     "0": [
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:         {
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "devices": [
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "/dev/loop3"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             ],
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_name": "ceph_lv0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_size": "21470642176",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "name": "ceph_lv0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "tags": {
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cluster_name": "ceph",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.crush_device_class": "",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.encrypted": "0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.objectstore": "bluestore",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osd_id": "0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.type": "block",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.vdo": "0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.with_tpm": "0"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             },
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "type": "block",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "vg_name": "ceph_vg0"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:         }
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:     ],
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:     "1": [
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:         {
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "devices": [
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "/dev/loop4"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             ],
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_name": "ceph_lv1",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_size": "21470642176",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "name": "ceph_lv1",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "tags": {
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cluster_name": "ceph",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.crush_device_class": "",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.encrypted": "0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.objectstore": "bluestore",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osd_id": "1",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.type": "block",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.vdo": "0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.with_tpm": "0"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             },
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "type": "block",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "vg_name": "ceph_vg1"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:         }
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:     ],
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:     "2": [
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:         {
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "devices": [
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "/dev/loop5"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             ],
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_name": "ceph_lv2",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_size": "21470642176",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "name": "ceph_lv2",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "tags": {
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.cluster_name": "ceph",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.crush_device_class": "",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.encrypted": "0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.objectstore": "bluestore",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osd_id": "2",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.type": "block",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.vdo": "0",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:                 "ceph.with_tpm": "0"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             },
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "type": "block",
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:             "vg_name": "ceph_vg2"
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:         }
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]:     ]
Dec 13 07:24:42 compute-0 frosty_elbakyan[161071]: }
Dec 13 07:24:42 compute-0 systemd[1]: libpod-7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e.scope: Deactivated successfully.
Dec 13 07:24:42 compute-0 podman[161054]: 2025-12-13 07:24:42.078719797 +0000 UTC m=+0.346383536 container died 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e82b72cc039be561759e1b6036f2b7435892212e23ed7fb6229a19e29b6ba784-merged.mount: Deactivated successfully.
Dec 13 07:24:42 compute-0 podman[161054]: 2025-12-13 07:24:42.106809092 +0000 UTC m=+0.374472820 container remove 7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 07:24:42 compute-0 systemd[1]: libpod-conmon-7af32cd8d3fd92e1270368a2dc0182a115b1c6b5aa24d4c4b14363cadcb9a23e.scope: Deactivated successfully.
Dec 13 07:24:42 compute-0 sudo[160959]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:42 compute-0 sudo[161108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:24:42 compute-0 sudo[161108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:42 compute-0 sudo[161108]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:42 compute-0 sudo[161135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:24:42 compute-0 sudo[161135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:42 compute-0 podman[161182]: 2025-12-13 07:24:42.442428753 +0000 UTC m=+0.030909590 container create b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:42 compute-0 systemd[1]: Started libpod-conmon-b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10.scope.
Dec 13 07:24:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:24:42 compute-0 podman[161182]: 2025-12-13 07:24:42.487797313 +0000 UTC m=+0.076278170 container init b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:24:42 compute-0 podman[161182]: 2025-12-13 07:24:42.492349055 +0000 UTC m=+0.080829891 container start b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 07:24:42 compute-0 podman[161182]: 2025-12-13 07:24:42.493384872 +0000 UTC m=+0.081865709 container attach b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:24:42 compute-0 interesting_black[161198]: 167 167
Dec 13 07:24:42 compute-0 systemd[1]: libpod-b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10.scope: Deactivated successfully.
Dec 13 07:24:42 compute-0 podman[161182]: 2025-12-13 07:24:42.496048772 +0000 UTC m=+0.084529609 container died b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 07:24:42 compute-0 podman[161182]: 2025-12-13 07:24:42.513344769 +0000 UTC m=+0.101825606 container remove b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_black, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:24:42 compute-0 podman[161182]: 2025-12-13 07:24:42.430741475 +0000 UTC m=+0.019222322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:24:42 compute-0 systemd[1]: libpod-conmon-b912114ed4d0064a002d0efc1ea435ff201e8f10784d0d35b8fefc4625956f10.scope: Deactivated successfully.
Dec 13 07:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a4a0bc1f57a3d47046154ac493fc9b15538ae795514448716ea205fb0fa086f-merged.mount: Deactivated successfully.
Dec 13 07:24:42 compute-0 podman[161228]: 2025-12-13 07:24:42.63405062 +0000 UTC m=+0.029849976 container create bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:24:42 compute-0 systemd[1]: Started libpod-conmon-bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a.scope.
Dec 13 07:24:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:24:42 compute-0 podman[161228]: 2025-12-13 07:24:42.693635647 +0000 UTC m=+0.089435023 container init bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 07:24:42 compute-0 podman[161228]: 2025-12-13 07:24:42.698182477 +0000 UTC m=+0.093981834 container start bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:42 compute-0 podman[161228]: 2025-12-13 07:24:42.69935418 +0000 UTC m=+0.095153538 container attach bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:42 compute-0 podman[161228]: 2025-12-13 07:24:42.623055432 +0000 UTC m=+0.018854809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:24:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:43 compute-0 lvm[161354]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:24:43 compute-0 lvm[161355]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:24:43 compute-0 lvm[161354]: VG ceph_vg0 finished
Dec 13 07:24:43 compute-0 lvm[161355]: VG ceph_vg1 finished
Dec 13 07:24:43 compute-0 lvm[161358]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:24:43 compute-0 lvm[161358]: VG ceph_vg2 finished
Dec 13 07:24:43 compute-0 stupefied_villani[161245]: {}
Dec 13 07:24:43 compute-0 lvm[161361]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:24:43 compute-0 lvm[161361]: VG ceph_vg2 finished
Dec 13 07:24:43 compute-0 systemd[1]: libpod-bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a.scope: Deactivated successfully.
Dec 13 07:24:43 compute-0 podman[161228]: 2025-12-13 07:24:43.262230063 +0000 UTC m=+0.658029419 container died bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 07:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9c805bb5b7b45fc738300f626babcee026c2c35f2137a08eb333593d82e9848-merged.mount: Deactivated successfully.
Dec 13 07:24:43 compute-0 podman[161228]: 2025-12-13 07:24:43.285766697 +0000 UTC m=+0.681566054 container remove bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:24:43 compute-0 systemd[1]: libpod-conmon-bce2f922c025c277dd0416eb91159bf3598321c9edff74c6c646570b0c5c342a.scope: Deactivated successfully.
Dec 13 07:24:43 compute-0 sudo[161135]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:24:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:43 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:24:43 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:43 compute-0 sudo[161372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:24:43 compute-0 sudo[161372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:24:43 compute-0 sudo[161372]: pam_unix(sudo:session): session closed for user root
Dec 13 07:24:43 compute-0 ceph-mon[74928]: pgmap v438: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:43 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:43 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:24:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:45 compute-0 ceph-mon[74928]: pgmap v439: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:47 compute-0 ceph-mon[74928]: pgmap v440: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:24:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:24:49 compute-0 ceph-mon[74928]: pgmap v441: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:51 compute-0 ceph-mon[74928]: pgmap v442: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:51 compute-0 podman[161404]: 2025-12-13 07:24:51.720979443 +0000 UTC m=+0.061115445 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true)
Dec 13 07:24:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:53 compute-0 ceph-mon[74928]: pgmap v443: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:54 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Dec 13 07:24:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 07:24:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 07:24:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 07:24:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 07:24:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 07:24:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 07:24:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 07:24:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:55 compute-0 ceph-mon[74928]: pgmap v444: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:57 compute-0 ceph-mon[74928]: pgmap v445: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:24:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:24:59 compute-0 ceph-mon[74928]: pgmap v446: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:01 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Dec 13 07:25:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 07:25:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 07:25:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 07:25:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 07:25:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 07:25:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 07:25:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 07:25:01 compute-0 ceph-mon[74928]: pgmap v447: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:03 compute-0 ceph-mon[74928]: pgmap v448: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:05 compute-0 ceph-mon[74928]: pgmap v449: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:07 compute-0 ceph-mon[74928]: pgmap v450: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec 13 07:25:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:25:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:25:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:25:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:25:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:25:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:25:09 compute-0 ceph-mon[74928]: pgmap v451: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec 13 07:25:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 07:25:11 compute-0 ceph-mon[74928]: pgmap v452: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 07:25:11 compute-0 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 13 07:25:11 compute-0 podman[162491]: 2025-12-13 07:25:11.700597103 +0000 UTC m=+0.040640806 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 07:25:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:25:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:13 compute-0 ceph-mon[74928]: pgmap v453: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:25:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:25:15 compute-0 ceph-mon[74928]: pgmap v454: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:25:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:25:17 compute-0 ceph-mon[74928]: pgmap v455: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:25:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:25:19 compute-0 ceph-mon[74928]: pgmap v456: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:25:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec 13 07:25:21 compute-0 ceph-mon[74928]: pgmap v457: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec 13 07:25:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 07:25:22 compute-0 podman[173232]: 2025-12-13 07:25:22.711611001 +0000 UTC m=+0.056744391 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:25:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:23 compute-0 ceph-mon[74928]: pgmap v458: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 07:25:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:25 compute-0 ceph-mon[74928]: pgmap v459: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:27 compute-0 ceph-mon[74928]: pgmap v460: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:29 compute-0 ceph-mon[74928]: pgmap v461: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:31 compute-0 ceph-mon[74928]: pgmap v462: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:33 compute-0 ceph-mon[74928]: pgmap v463: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:35 compute-0 ceph-mon[74928]: pgmap v464: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:37 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Dec 13 07:25:37 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 07:25:37 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 07:25:37 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 07:25:37 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 07:25:37 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 07:25:37 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 07:25:37 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 07:25:37 compute-0 ceph-mon[74928]: pgmap v465: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:37 compute-0 groupadd[178309]: group added to /etc/group: name=dnsmasq, GID=991
Dec 13 07:25:37 compute-0 groupadd[178309]: group added to /etc/gshadow: name=dnsmasq
Dec 13 07:25:37 compute-0 groupadd[178309]: new group: name=dnsmasq, GID=991
Dec 13 07:25:37 compute-0 useradd[178316]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 13 07:25:37 compute-0 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 07:25:37 compute-0 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 13 07:25:37 compute-0 dbus-broker-launch[727]: Noticed file-system modification, trigger reload.
Dec 13 07:25:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:25:38
Dec 13 07:25:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:25:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:25:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Dec 13 07:25:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:25:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:38 compute-0 groupadd[178329]: group added to /etc/group: name=clevis, GID=990
Dec 13 07:25:38 compute-0 groupadd[178329]: group added to /etc/gshadow: name=clevis
Dec 13 07:25:38 compute-0 groupadd[178329]: new group: name=clevis, GID=990
Dec 13 07:25:38 compute-0 useradd[178336]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 13 07:25:38 compute-0 usermod[178346]: add 'clevis' to group 'tss'
Dec 13 07:25:38 compute-0 usermod[178346]: add 'clevis' to shadow group 'tss'
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:25:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:25:39 compute-0 ceph-mon[74928]: pgmap v466: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:40 compute-0 polkitd[43388]: Reloading rules
Dec 13 07:25:40 compute-0 polkitd[43388]: Collecting garbage unconditionally...
Dec 13 07:25:40 compute-0 polkitd[43388]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 07:25:40 compute-0 polkitd[43388]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 07:25:40 compute-0 polkitd[43388]: Finished loading, compiling and executing 3 rules
Dec 13 07:25:40 compute-0 polkitd[43388]: Reloading rules
Dec 13 07:25:40 compute-0 polkitd[43388]: Collecting garbage unconditionally...
Dec 13 07:25:40 compute-0 polkitd[43388]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 07:25:40 compute-0 polkitd[43388]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 07:25:40 compute-0 polkitd[43388]: Finished loading, compiling and executing 3 rules
Dec 13 07:25:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:25:41.635 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:25:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:25:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:25:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:25:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:25:41 compute-0 ceph-mon[74928]: pgmap v467: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:42 compute-0 podman[178533]: 2025-12-13 07:25:42.735055475 +0000 UTC m=+0.067655744 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 07:25:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:43 compute-0 sudo[179154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:25:43 compute-0 sudo[179154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:43 compute-0 sudo[179154]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:43 compute-0 sudo[179188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:25:43 compute-0 sudo[179188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:43 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 13 07:25:43 compute-0 sshd[963]: Received signal 15; terminating.
Dec 13 07:25:43 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 13 07:25:43 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 13 07:25:43 compute-0 systemd[1]: sshd.service: Consumed 1.500s CPU time, read 32.0K from disk, written 0B to disk.
Dec 13 07:25:43 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 13 07:25:43 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 13 07:25:43 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 07:25:43 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 07:25:43 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 07:25:43 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 13 07:25:43 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 13 07:25:43 compute-0 sshd[179217]: Server listening on 0.0.0.0 port 22.
Dec 13 07:25:43 compute-0 sshd[179217]: Server listening on :: port 22.
Dec 13 07:25:43 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 13 07:25:43 compute-0 ceph-mon[74928]: pgmap v468: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:43 compute-0 podman[179294]: 2025-12-13 07:25:43.816130989 +0000 UTC m=+0.052542562 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:25:43 compute-0 podman[179294]: 2025-12-13 07:25:43.918741518 +0000 UTC m=+0.155153091 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:25:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:44 compute-0 sudo[179188]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:25:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:44 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:25:44 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:44 compute-0 sudo[179582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:25:44 compute-0 sudo[179582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:44 compute-0 sudo[179582]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:44 compute-0 sudo[179613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:25:44 compute-0 sudo[179613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:44 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 07:25:44 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 07:25:44 compute-0 systemd[1]: Reloading.
Dec 13 07:25:45 compute-0 sudo[179613]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:45 compute-0 systemd-rc-local-generator[179732]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:45 compute-0 systemd-sysv-generator[179739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:25:45 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:25:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:25:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:25:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:25:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:25:45 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:25:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:25:45 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:25:45 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:25:45 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:25:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 07:25:45 compute-0 sudo[179750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:25:45 compute-0 sudo[179750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:45 compute-0 sudo[179750]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:45 compute-0 sudo[179977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:25:45 compute-0 sudo[179977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:45 compute-0 ceph-mon[74928]: pgmap v469: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:25:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:25:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:25:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:25:45 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:25:45 compute-0 podman[180356]: 2025-12-13 07:25:45.48059038 +0000 UTC m=+0.017546619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:25:45 compute-0 podman[180356]: 2025-12-13 07:25:45.953242041 +0000 UTC m=+0.490198259 container create 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:25:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:46 compute-0 systemd[1]: Started libpod-conmon-18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3.scope.
Dec 13 07:25:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:25:46 compute-0 podman[180356]: 2025-12-13 07:25:46.696823943 +0000 UTC m=+1.233780181 container init 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:25:46 compute-0 podman[180356]: 2025-12-13 07:25:46.702516575 +0000 UTC m=+1.239472793 container start 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 07:25:46 compute-0 podman[180356]: 2025-12-13 07:25:46.706489975 +0000 UTC m=+1.243446203 container attach 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 07:25:46 compute-0 great_hermann[182085]: 167 167
Dec 13 07:25:46 compute-0 systemd[1]: libpod-18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3.scope: Deactivated successfully.
Dec 13 07:25:46 compute-0 podman[180356]: 2025-12-13 07:25:46.707137853 +0000 UTC m=+1.244094071 container died 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2950c24b9aa824c4417ddd99cae21bbda037f5125db89e2cb4dfc35edbe15c74-merged.mount: Deactivated successfully.
Dec 13 07:25:46 compute-0 podman[180356]: 2025-12-13 07:25:46.735854631 +0000 UTC m=+1.272810850 container remove 18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hermann, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 07:25:46 compute-0 systemd[1]: libpod-conmon-18d1fb2a53527d6eab8b6a67684dd5b28f750d03147f93d351ae6ac68b91ebb3.scope: Deactivated successfully.
Dec 13 07:25:46 compute-0 podman[182765]: 2025-12-13 07:25:46.862278609 +0000 UTC m=+0.032902549 container create 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:25:46 compute-0 systemd[1]: Started libpod-conmon-2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366.scope.
Dec 13 07:25:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:46 compute-0 podman[182765]: 2025-12-13 07:25:46.926378218 +0000 UTC m=+0.097002167 container init 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:25:46 compute-0 podman[182765]: 2025-12-13 07:25:46.932822302 +0000 UTC m=+0.103446232 container start 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:25:46 compute-0 podman[182765]: 2025-12-13 07:25:46.934263621 +0000 UTC m=+0.104887551 container attach 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:25:46 compute-0 podman[182765]: 2025-12-13 07:25:46.847339073 +0000 UTC m=+0.017963021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:25:47 compute-0 sudo[160525]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:47 compute-0 inspiring_mirzakhani[182882]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:25:47 compute-0 inspiring_mirzakhani[182882]: --> All data devices are unavailable
Dec 13 07:25:47 compute-0 systemd[1]: libpod-2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366.scope: Deactivated successfully.
Dec 13 07:25:47 compute-0 podman[182765]: 2025-12-13 07:25:47.287166734 +0000 UTC m=+0.457790663 container died 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-128141ae9eca5d3fa24835d42bed329c14dae2cbac94b4a9cc76275e1e03d40f-merged.mount: Deactivated successfully.
Dec 13 07:25:47 compute-0 podman[182765]: 2025-12-13 07:25:47.313235934 +0000 UTC m=+0.483859863 container remove 2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:25:47 compute-0 systemd[1]: libpod-conmon-2c6dba67e68863417fb7acf8206c088fb2216780fdd017bf68b1cc3088038366.scope: Deactivated successfully.
Dec 13 07:25:47 compute-0 sudo[179977]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:47 compute-0 sudo[183766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:25:47 compute-0 sudo[183766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:47 compute-0 sudo[183766]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:47 compute-0 sudo[183856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:25:47 compute-0 sudo[183856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:47 compute-0 ceph-mon[74928]: pgmap v470: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:47 compute-0 podman[184229]: 2025-12-13 07:25:47.665886671 +0000 UTC m=+0.034038443 container create 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:25:47 compute-0 systemd[1]: Started libpod-conmon-20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63.scope.
Dec 13 07:25:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:25:47 compute-0 podman[184229]: 2025-12-13 07:25:47.72358239 +0000 UTC m=+0.091734183 container init 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:25:47 compute-0 podman[184229]: 2025-12-13 07:25:47.728591918 +0000 UTC m=+0.096743680 container start 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 07:25:47 compute-0 podman[184229]: 2025-12-13 07:25:47.730154405 +0000 UTC m=+0.098306178 container attach 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:25:47 compute-0 charming_elbakyan[184325]: 167 167
Dec 13 07:25:47 compute-0 podman[184229]: 2025-12-13 07:25:47.732489386 +0000 UTC m=+0.100641157 container died 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:25:47 compute-0 systemd[1]: libpod-20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63.scope: Deactivated successfully.
Dec 13 07:25:47 compute-0 podman[184229]: 2025-12-13 07:25:47.651702274 +0000 UTC m=+0.019854047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ac67443d899512b6803ded8429a2d1069aa42bcdbc98a21a6c3764dafa0dc8b-merged.mount: Deactivated successfully.
Dec 13 07:25:47 compute-0 podman[184229]: 2025-12-13 07:25:47.761976823 +0000 UTC m=+0.130128594 container remove 20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:25:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:47 compute-0 systemd[1]: libpod-conmon-20a4e358ec30b91561e5f2c7ed56d64ab2e65100dcb490a15a43c7c2782dba63.scope: Deactivated successfully.
Dec 13 07:25:47 compute-0 sudo[184573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqbqqnrwrrmxcpfqzsttprcqeidyaaot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610747.2986503-336-14620475753727/AnsiballZ_systemd.py'
Dec 13 07:25:47 compute-0 sudo[184573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:47 compute-0 podman[184606]: 2025-12-13 07:25:47.890981942 +0000 UTC m=+0.031360859 container create c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:25:47 compute-0 systemd[1]: Started libpod-conmon-c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a.scope.
Dec 13 07:25:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:47 compute-0 podman[184606]: 2025-12-13 07:25:47.946264705 +0000 UTC m=+0.086643632 container init c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:25:47 compute-0 podman[184606]: 2025-12-13 07:25:47.954186218 +0000 UTC m=+0.094565133 container start c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:25:47 compute-0 podman[184606]: 2025-12-13 07:25:47.962455344 +0000 UTC m=+0.102834270 container attach c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 07:25:47 compute-0 podman[184606]: 2025-12-13 07:25:47.876051313 +0000 UTC m=+0.016430229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:25:48 compute-0 python3.9[184595]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:25:48 compute-0 systemd[1]: Reloading.
Dec 13 07:25:48 compute-0 elated_williams[184707]: {
Dec 13 07:25:48 compute-0 elated_williams[184707]:     "0": [
Dec 13 07:25:48 compute-0 elated_williams[184707]:         {
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "devices": [
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "/dev/loop3"
Dec 13 07:25:48 compute-0 elated_williams[184707]:             ],
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_name": "ceph_lv0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_size": "21470642176",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "name": "ceph_lv0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "tags": {
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cluster_name": "ceph",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.crush_device_class": "",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.encrypted": "0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.objectstore": "bluestore",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osd_id": "0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.type": "block",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.vdo": "0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.with_tpm": "0"
Dec 13 07:25:48 compute-0 elated_williams[184707]:             },
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "type": "block",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "vg_name": "ceph_vg0"
Dec 13 07:25:48 compute-0 elated_williams[184707]:         }
Dec 13 07:25:48 compute-0 elated_williams[184707]:     ],
Dec 13 07:25:48 compute-0 elated_williams[184707]:     "1": [
Dec 13 07:25:48 compute-0 elated_williams[184707]:         {
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "devices": [
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "/dev/loop4"
Dec 13 07:25:48 compute-0 elated_williams[184707]:             ],
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_name": "ceph_lv1",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_size": "21470642176",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "name": "ceph_lv1",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "tags": {
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cluster_name": "ceph",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.crush_device_class": "",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.encrypted": "0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.objectstore": "bluestore",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osd_id": "1",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.type": "block",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.vdo": "0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.with_tpm": "0"
Dec 13 07:25:48 compute-0 elated_williams[184707]:             },
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "type": "block",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "vg_name": "ceph_vg1"
Dec 13 07:25:48 compute-0 elated_williams[184707]:         }
Dec 13 07:25:48 compute-0 elated_williams[184707]:     ],
Dec 13 07:25:48 compute-0 elated_williams[184707]:     "2": [
Dec 13 07:25:48 compute-0 elated_williams[184707]:         {
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "devices": [
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "/dev/loop5"
Dec 13 07:25:48 compute-0 elated_williams[184707]:             ],
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_name": "ceph_lv2",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_size": "21470642176",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "name": "ceph_lv2",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "tags": {
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.cluster_name": "ceph",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.crush_device_class": "",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.encrypted": "0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.objectstore": "bluestore",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osd_id": "2",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.type": "block",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.vdo": "0",
Dec 13 07:25:48 compute-0 elated_williams[184707]:                 "ceph.with_tpm": "0"
Dec 13 07:25:48 compute-0 elated_williams[184707]:             },
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "type": "block",
Dec 13 07:25:48 compute-0 elated_williams[184707]:             "vg_name": "ceph_vg2"
Dec 13 07:25:48 compute-0 elated_williams[184707]:         }
Dec 13 07:25:48 compute-0 elated_williams[184707]:     ]
Dec 13 07:25:48 compute-0 elated_williams[184707]: }
Dec 13 07:25:48 compute-0 podman[184606]: 2025-12-13 07:25:48.180843867 +0000 UTC m=+0.321222784 container died c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:25:48 compute-0 systemd-rc-local-generator[185172]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:48 compute-0 systemd-sysv-generator[185176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:48 compute-0 systemd[1]: libpod-c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a.scope: Deactivated successfully.
Dec 13 07:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac12634e7e1913ec503937c097eeebc42e188fd6d72ffecb86b0109d49013245-merged.mount: Deactivated successfully.
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:48 compute-0 podman[184606]: 2025-12-13 07:25:48.418599851 +0000 UTC m=+0.558978767 container remove c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:25:48 compute-0 systemd[1]: libpod-conmon-c9ea200225a7e19c01a368979be588395f6871766a37ecb91976c12ea053986a.scope: Deactivated successfully.
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:25:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:25:48 compute-0 sudo[183856]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:48 compute-0 sudo[184573]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:48 compute-0 sudo[185526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:25:48 compute-0 sudo[185526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:48 compute-0 sudo[185526]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:48 compute-0 sudo[185638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:25:48 compute-0 sudo[185638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:48 compute-0 podman[186087]: 2025-12-13 07:25:48.783590917 +0000 UTC m=+0.032639744 container create 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 07:25:48 compute-0 sudo[186155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkbnsujtwxnmvxvghphslvyblenoltft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610748.5603917-336-249281350787331/AnsiballZ_systemd.py'
Dec 13 07:25:48 compute-0 sudo[186155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:48 compute-0 systemd[1]: Started libpod-conmon-57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632.scope.
Dec 13 07:25:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:25:48 compute-0 podman[186087]: 2025-12-13 07:25:48.827420808 +0000 UTC m=+0.076469645 container init 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:25:48 compute-0 podman[186087]: 2025-12-13 07:25:48.832944824 +0000 UTC m=+0.081993641 container start 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:25:48 compute-0 podman[186087]: 2025-12-13 07:25:48.834245048 +0000 UTC m=+0.083293886 container attach 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:25:48 compute-0 thirsty_kalam[186204]: 167 167
Dec 13 07:25:48 compute-0 systemd[1]: libpod-57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632.scope: Deactivated successfully.
Dec 13 07:25:48 compute-0 podman[186087]: 2025-12-13 07:25:48.837138247 +0000 UTC m=+0.086187064 container died 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fc6c7d82c05715dbe2f3fa8ec49b3a7f73ca67f18147cb32f57f116c5bdbc8d-merged.mount: Deactivated successfully.
Dec 13 07:25:48 compute-0 podman[186087]: 2025-12-13 07:25:48.860340308 +0000 UTC m=+0.109389125 container remove 57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:25:48 compute-0 podman[186087]: 2025-12-13 07:25:48.770752752 +0000 UTC m=+0.019801589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:25:48 compute-0 systemd[1]: libpod-conmon-57028279ec2476bf55108eb4f607c592e7819d17dee8671a03aef52509299632.scope: Deactivated successfully.
Dec 13 07:25:48 compute-0 podman[186439]: 2025-12-13 07:25:48.982132989 +0000 UTC m=+0.027604538 container create fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:25:49 compute-0 systemd[1]: Started libpod-conmon-fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630.scope.
Dec 13 07:25:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:25:49 compute-0 podman[186439]: 2025-12-13 07:25:49.039063649 +0000 UTC m=+0.084535188 container init fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:25:49 compute-0 podman[186439]: 2025-12-13 07:25:49.044377229 +0000 UTC m=+0.089848767 container start fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:25:49 compute-0 podman[186439]: 2025-12-13 07:25:49.048315633 +0000 UTC m=+0.093787172 container attach fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:25:49 compute-0 python3.9[186181]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:25:49 compute-0 podman[186439]: 2025-12-13 07:25:48.970431811 +0000 UTC m=+0.015903370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:25:49 compute-0 systemd[1]: Reloading.
Dec 13 07:25:49 compute-0 systemd-rc-local-generator[186682]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:49 compute-0 systemd-sysv-generator[186687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:49 compute-0 sudo[186155]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:49 compute-0 ceph-mon[74928]: pgmap v471: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:49 compute-0 lvm[187293]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:25:49 compute-0 lvm[187293]: VG ceph_vg0 finished
Dec 13 07:25:49 compute-0 lvm[187292]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:25:49 compute-0 lvm[187292]: VG ceph_vg1 finished
Dec 13 07:25:49 compute-0 thirsty_spence[186515]: {}
Dec 13 07:25:49 compute-0 lvm[187319]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:25:49 compute-0 lvm[187319]: VG ceph_vg2 finished
Dec 13 07:25:49 compute-0 systemd[1]: libpod-fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630.scope: Deactivated successfully.
Dec 13 07:25:49 compute-0 podman[186439]: 2025-12-13 07:25:49.665955603 +0000 UTC m=+0.711427132 container died fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:25:49 compute-0 lvm[187357]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:25:49 compute-0 lvm[187357]: VG ceph_vg2 finished
Dec 13 07:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cc19cbcea4da2fc45bef94235b623dd151c23b98016264d9d840de833c2f840-merged.mount: Deactivated successfully.
Dec 13 07:25:49 compute-0 podman[186439]: 2025-12-13 07:25:49.695194111 +0000 UTC m=+0.740665651 container remove fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:25:49 compute-0 systemd[1]: libpod-conmon-fafb32154f93bfe1de7008ca8b8b248391b205f35534befe629689db3b51d630.scope: Deactivated successfully.
Dec 13 07:25:49 compute-0 lvm[187397]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:25:49 compute-0 lvm[187397]: VG ceph_vg2 finished
Dec 13 07:25:49 compute-0 sudo[185638]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:25:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:49 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:25:49 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:49 compute-0 sudo[187451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:25:49 compute-0 sudo[187451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:25:49 compute-0 sudo[187451]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:49 compute-0 sudo[187548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmpyyfbxqvxbtexqygintpkuebimwlap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610749.5489-336-233915763350679/AnsiballZ_systemd.py'
Dec 13 07:25:49 compute-0 sudo[187548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:50 compute-0 python3.9[187568]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:25:50 compute-0 systemd[1]: Reloading.
Dec 13 07:25:50 compute-0 systemd-rc-local-generator[188155]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:50 compute-0 systemd-sysv-generator[188164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:50 compute-0 sudo[187548]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:50 compute-0 sudo[188971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krnuolhfjiqfnxbytmcqpjjmkqbmdzgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610750.4493887-336-116228554355969/AnsiballZ_systemd.py'
Dec 13 07:25:50 compute-0 sudo[188971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:25:50 compute-0 ceph-mon[74928]: pgmap v472: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:50 compute-0 python3.9[188997]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:25:50 compute-0 systemd[1]: Reloading.
Dec 13 07:25:51 compute-0 systemd-sysv-generator[189369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:51 compute-0 systemd-rc-local-generator[189366]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 07:25:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 07:25:51 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.786s CPU time.
Dec 13 07:25:51 compute-0 systemd[1]: run-rd85c87235c8844d2baecc0c67c88dff0.service: Deactivated successfully.
Dec 13 07:25:51 compute-0 sudo[188971]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:51 compute-0 sudo[189531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaebpujafhqhzecrmhwtbfklhtlembol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610751.3466284-365-270756144245684/AnsiballZ_systemd.py'
Dec 13 07:25:51 compute-0 sudo[189531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:51 compute-0 python3.9[189533]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:51 compute-0 systemd[1]: Reloading.
Dec 13 07:25:51 compute-0 systemd-rc-local-generator[189557]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:51 compute-0 systemd-sysv-generator[189560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:52 compute-0 sudo[189531]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:52 compute-0 sudo[189720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqurlfnsujbnuclszjtdjfiaohnbqibm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610752.1824303-365-74731737119251/AnsiballZ_systemd.py'
Dec 13 07:25:52 compute-0 sudo[189720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:52 compute-0 python3.9[189722]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:52 compute-0 systemd[1]: Reloading.
Dec 13 07:25:52 compute-0 systemd-sysv-generator[189752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:52 compute-0 systemd-rc-local-generator[189749]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:52 compute-0 sudo[189720]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:52 compute-0 podman[189761]: 2025-12-13 07:25:52.992267155 +0000 UTC m=+0.067931994 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 07:25:53 compute-0 sudo[189934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osvkagddvvhxtiioknbfhbmqlqajacjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610753.0070324-365-162718803592025/AnsiballZ_systemd.py'
Dec 13 07:25:53 compute-0 sudo[189934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:53 compute-0 python3.9[189936]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:53 compute-0 ceph-mon[74928]: pgmap v473: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:53 compute-0 systemd[1]: Reloading.
Dec 13 07:25:53 compute-0 systemd-sysv-generator[189963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:53 compute-0 systemd-rc-local-generator[189960]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:53 compute-0 sudo[189934]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:54 compute-0 sudo[190124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgxjccwjbubhnopouflrmnplogfwfcme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610753.8385687-365-181720439525895/AnsiballZ_systemd.py'
Dec 13 07:25:54 compute-0 sudo[190124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:54 compute-0 python3.9[190126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:54 compute-0 sudo[190124]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:54 compute-0 sudo[190279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmqhdlcdhcjlapzgsxunggnqzdqhxkhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610754.4208794-365-27484434293501/AnsiballZ_systemd.py'
Dec 13 07:25:54 compute-0 sudo[190279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:54 compute-0 python3.9[190281]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:54 compute-0 systemd[1]: Reloading.
Dec 13 07:25:54 compute-0 systemd-rc-local-generator[190306]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:54 compute-0 systemd-sysv-generator[190309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:55 compute-0 sudo[190279]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:55 compute-0 ceph-mon[74928]: pgmap v474: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:55 compute-0 sudo[190469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcimkxgskwyrxpijxwtywhkmfvsqmgiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610755.3039052-401-125082101143785/AnsiballZ_systemd.py'
Dec 13 07:25:55 compute-0 sudo[190469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:55 compute-0 python3.9[190471]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 07:25:55 compute-0 systemd[1]: Reloading.
Dec 13 07:25:55 compute-0 systemd-rc-local-generator[190497]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:25:55 compute-0 systemd-sysv-generator[190500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:25:56 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 13 07:25:56 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 13 07:25:56 compute-0 sudo[190469]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:56 compute-0 sudo[190662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wepcpjuworhiiuttswdpyzhiicwhelwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610756.2148683-409-267414559078433/AnsiballZ_systemd.py'
Dec 13 07:25:56 compute-0 sudo[190662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:56 compute-0 python3.9[190664]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:56 compute-0 sudo[190662]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:57 compute-0 sudo[190817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qduezxqccdqwxoyptucprxxamwjsnlbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610756.79899-409-81811770159687/AnsiballZ_systemd.py'
Dec 13 07:25:57 compute-0 sudo[190817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:57 compute-0 python3.9[190819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:57 compute-0 sudo[190817]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:57 compute-0 ceph-mon[74928]: pgmap v475: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:57 compute-0 sudo[190972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itauantersomdrjgtdeusieqdaoyoqlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610757.3882046-409-93007118952600/AnsiballZ_systemd.py'
Dec 13 07:25:57 compute-0 sudo[190972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:25:57 compute-0 python3.9[190974]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:57 compute-0 sudo[190972]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:58 compute-0 sudo[191127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgxbuiwbyugttwprzcxclwpohatpxenw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610757.947349-409-39512380339838/AnsiballZ_systemd.py'
Dec 13 07:25:58 compute-0 sudo[191127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:58 compute-0 python3.9[191129]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:58 compute-0 sudo[191127]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:58 compute-0 sudo[191282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjenhahwbwoummowcsxekxujogbggfjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610758.5125873-409-226008203724698/AnsiballZ_systemd.py'
Dec 13 07:25:58 compute-0 sudo[191282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:58 compute-0 python3.9[191284]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:59 compute-0 sudo[191282]: pam_unix(sudo:session): session closed for user root
Dec 13 07:25:59 compute-0 sudo[191437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obfgjsusitotfjocwdgnyxravusycbsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610759.1021185-409-112456490863357/AnsiballZ_systemd.py'
Dec 13 07:25:59 compute-0 sudo[191437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:25:59 compute-0 ceph-mon[74928]: pgmap v476: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:25:59 compute-0 python3.9[191439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:25:59 compute-0 sudo[191437]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:00 compute-0 sudo[191592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elwxztzysxtgqfgziptwgepotkkpphyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610759.826931-409-32909816824010/AnsiballZ_systemd.py'
Dec 13 07:26:00 compute-0 sudo[191592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:00 compute-0 python3.9[191594]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:26:00 compute-0 sudo[191592]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:00 compute-0 sudo[191747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxolussrortdqexlbkabsiyziehzeobe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610760.4160774-409-226186936384286/AnsiballZ_systemd.py'
Dec 13 07:26:00 compute-0 sudo[191747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:00 compute-0 python3.9[191749]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:26:00 compute-0 sudo[191747]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:01 compute-0 sudo[191902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvuwyfxlfmtnfctfhvwohgwhykeykpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610760.992608-409-111637608417328/AnsiballZ_systemd.py'
Dec 13 07:26:01 compute-0 sudo[191902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:01 compute-0 python3.9[191904]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:26:01 compute-0 sudo[191902]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:01 compute-0 ceph-mon[74928]: pgmap v477: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:01 compute-0 sudo[192057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxshruzuajcaxgjbeggcjumgqoiquadv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610761.5770588-409-113060635039/AnsiballZ_systemd.py'
Dec 13 07:26:01 compute-0 sudo[192057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:02 compute-0 python3.9[192059]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:26:02 compute-0 sudo[192057]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:02 compute-0 sudo[192212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omcjajpxtqfbbjqcbidhcksntgtwanjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610762.151755-409-178706558794834/AnsiballZ_systemd.py'
Dec 13 07:26:02 compute-0 sudo[192212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:02 compute-0 python3.9[192214]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:26:02 compute-0 sudo[192212]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:02 compute-0 sudo[192367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjwpntdmvtbxuedhithagljlfgptgnqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610762.7474804-409-45576151258471/AnsiballZ_systemd.py'
Dec 13 07:26:02 compute-0 sudo[192367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:03 compute-0 python3.9[192369]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:26:03 compute-0 sudo[192367]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:03 compute-0 sudo[192522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olwgulisneavgtqcauyjmlybewzqgkyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610763.312553-409-150967411496097/AnsiballZ_systemd.py'
Dec 13 07:26:03 compute-0 sudo[192522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:03 compute-0 ceph-mon[74928]: pgmap v478: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:03 compute-0 python3.9[192524]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:26:03 compute-0 sudo[192522]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:04 compute-0 sudo[192677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icpcszjkuckxbscodncmwemsezxrigcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610763.9337108-409-203621125235817/AnsiballZ_systemd.py'
Dec 13 07:26:04 compute-0 sudo[192677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:04 compute-0 python3.9[192679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 07:26:04 compute-0 sudo[192677]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:05 compute-0 sudo[192832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojyxvlegpueouzrcyekymdmfgcudluvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610764.8385608-511-173517741115981/AnsiballZ_file.py'
Dec 13 07:26:05 compute-0 sudo[192832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:05 compute-0 python3.9[192834]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:26:05 compute-0 sudo[192832]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:05 compute-0 sudo[192984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkjyojdouwhvfwcqtdplseqomltlwjaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610765.2834172-511-48665220285033/AnsiballZ_file.py'
Dec 13 07:26:05 compute-0 sudo[192984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:05 compute-0 ceph-mon[74928]: pgmap v479: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:05 compute-0 python3.9[192986]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:26:05 compute-0 sudo[192984]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:05 compute-0 sudo[193136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykxegggglayckguhoyjbvgkfxozhajlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610765.7416062-511-15152066919988/AnsiballZ_file.py'
Dec 13 07:26:05 compute-0 sudo[193136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:06 compute-0 python3.9[193138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:26:06 compute-0 sudo[193136]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:06 compute-0 sudo[193288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqzlfyeaoitxyacrfoxlxvjdxhortbmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610766.1939604-511-64500055650814/AnsiballZ_file.py'
Dec 13 07:26:06 compute-0 sudo[193288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:06 compute-0 python3.9[193290]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:26:06 compute-0 sudo[193288]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:06 compute-0 sudo[193440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roiepqujuclxkfvalffzcnvjxrpaqodu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610766.6515584-511-271335813714803/AnsiballZ_file.py'
Dec 13 07:26:06 compute-0 sudo[193440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:06 compute-0 python3.9[193442]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:26:06 compute-0 sudo[193440]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:07 compute-0 sudo[193592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sziibyqukpqiybzajwslsqmbhbywynov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610767.1041007-511-70541050674558/AnsiballZ_file.py'
Dec 13 07:26:07 compute-0 sudo[193592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:07 compute-0 python3.9[193594]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:26:07 compute-0 sudo[193592]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:07 compute-0 ceph-mon[74928]: pgmap v480: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:07 compute-0 sudo[193744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niguapxdesdbshsbqxatpegclehtedci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610767.5954168-554-206811968988169/AnsiballZ_stat.py'
Dec 13 07:26:07 compute-0 sudo[193744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:08 compute-0 python3.9[193746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:08 compute-0 sudo[193744]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:08 compute-0 sudo[193869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sidxzdzijjmewnkavpcsjcbgetyohqaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610767.5954168-554-206811968988169/AnsiballZ_copy.py'
Dec 13 07:26:08 compute-0 sudo[193869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:08 compute-0 python3.9[193871]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610767.5954168-554-206811968988169/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:08 compute-0 sudo[193869]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:09 compute-0 sudo[194021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thwqmyqvqrinpklylzwaeiivgudoxkkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610768.868036-554-154825060888130/AnsiballZ_stat.py'
Dec 13 07:26:09 compute-0 sudo[194021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:26:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:26:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:26:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:26:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:26:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:26:09 compute-0 python3.9[194023]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:09 compute-0 sudo[194021]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:09 compute-0 sudo[194146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arnhupqerismumvdqnraavwdcnshswnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610768.868036-554-154825060888130/AnsiballZ_copy.py'
Dec 13 07:26:09 compute-0 sudo[194146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:09 compute-0 ceph-mon[74928]: pgmap v481: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:09 compute-0 python3.9[194148]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610768.868036-554-154825060888130/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:09 compute-0 sudo[194146]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:09 compute-0 sudo[194298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbikuslhsumvwaruqiswoblwdsbcroly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610769.777508-554-237454709167106/AnsiballZ_stat.py'
Dec 13 07:26:09 compute-0 sudo[194298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:10 compute-0 python3.9[194300]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:10 compute-0 sudo[194298]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:10 compute-0 sudo[194423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbynkwmjaqtgnkvduunperzhjkqezaqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610769.777508-554-237454709167106/AnsiballZ_copy.py'
Dec 13 07:26:10 compute-0 sudo[194423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:10 compute-0 python3.9[194425]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610769.777508-554-237454709167106/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:10 compute-0 sudo[194423]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:10 compute-0 sudo[194575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlxzhfsmyzpwtfwnzomyqndobuupjklt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610770.66293-554-231400239945811/AnsiballZ_stat.py'
Dec 13 07:26:10 compute-0 sudo[194575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:11 compute-0 python3.9[194577]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:11 compute-0 sudo[194575]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:11 compute-0 sudo[194700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zapaseunsazbvstlmolwurzlyufsoghj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610770.66293-554-231400239945811/AnsiballZ_copy.py'
Dec 13 07:26:11 compute-0 sudo[194700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:11 compute-0 python3.9[194702]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610770.66293-554-231400239945811/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:11 compute-0 sudo[194700]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:11 compute-0 ceph-mon[74928]: pgmap v482: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:11 compute-0 sudo[194852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzwirpgsldcxjuwsiflfyxsnanqdmlix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610771.5813026-554-10069716134177/AnsiballZ_stat.py'
Dec 13 07:26:11 compute-0 sudo[194852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:11 compute-0 python3.9[194854]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:11 compute-0 sudo[194852]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:12 compute-0 sudo[194977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjlordalzjqlrtcjwquwcdblidrceen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610771.5813026-554-10069716134177/AnsiballZ_copy.py'
Dec 13 07:26:12 compute-0 sudo[194977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:12 compute-0 python3.9[194979]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610771.5813026-554-10069716134177/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:12 compute-0 sudo[194977]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:12 compute-0 sudo[195129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzujhbdslgfropjfcrhufjhfxzsgfpup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610772.4290485-554-134739239786045/AnsiballZ_stat.py'
Dec 13 07:26:12 compute-0 sudo[195129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:12 compute-0 python3.9[195131]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:12 compute-0 sudo[195129]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:13 compute-0 sudo[195263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rosdpdkrqhbdhzrycvviuhznjtiyanbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610772.4290485-554-134739239786045/AnsiballZ_copy.py'
Dec 13 07:26:13 compute-0 sudo[195263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:13 compute-0 podman[195228]: 2025-12-13 07:26:13.038295697 +0000 UTC m=+0.042926403 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 07:26:13 compute-0 python3.9[195271]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610772.4290485-554-134739239786045/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:13 compute-0 sudo[195263]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:13 compute-0 sudo[195422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lthtozoazvdjzciehtvjjoyzwfzjxknv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610773.288587-554-49847572697842/AnsiballZ_stat.py'
Dec 13 07:26:13 compute-0 sudo[195422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:13 compute-0 ceph-mon[74928]: pgmap v483: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:13 compute-0 python3.9[195424]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:13 compute-0 sudo[195422]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:13 compute-0 sudo[195545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfflsrtsmqaibfqresfrpqyoevdringg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610773.288587-554-49847572697842/AnsiballZ_copy.py'
Dec 13 07:26:13 compute-0 sudo[195545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:14 compute-0 python3.9[195547]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610773.288587-554-49847572697842/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:14 compute-0 sudo[195545]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:14 compute-0 sudo[195697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvccsmgiyvgoicpnotrjcbngpbmdoeoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610774.105453-554-125570532585574/AnsiballZ_stat.py'
Dec 13 07:26:14 compute-0 sudo[195697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:14 compute-0 python3.9[195699]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:14 compute-0 sudo[195697]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:14 compute-0 sudo[195822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpwxdgrbhfptzgrwmsfjryvsqxjyfalr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610774.105453-554-125570532585574/AnsiballZ_copy.py'
Dec 13 07:26:14 compute-0 sudo[195822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:14 compute-0 python3.9[195824]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765610774.105453-554-125570532585574/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:14 compute-0 sudo[195822]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:15 compute-0 sudo[195974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqahkzzxefrisfmupyfojhranbdapfrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610774.9319077-667-80201284854445/AnsiballZ_command.py'
Dec 13 07:26:15 compute-0 sudo[195974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:15 compute-0 python3.9[195976]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 13 07:26:15 compute-0 sudo[195974]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:15 compute-0 ceph-mon[74928]: pgmap v484: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:15 compute-0 sudo[196127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tltjxcfvzfrcnbhthctaoqltcuyvahkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610775.4026835-676-238010544385088/AnsiballZ_file.py'
Dec 13 07:26:15 compute-0 sudo[196127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:15 compute-0 python3.9[196129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:15 compute-0 sudo[196127]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:15 compute-0 sudo[196279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vltwkzoazjzmqihphlqasaxgwywxrbun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610775.830204-676-174665590674615/AnsiballZ_file.py'
Dec 13 07:26:15 compute-0 sudo[196279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:16 compute-0 python3.9[196281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:16 compute-0 sudo[196279]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:16 compute-0 sudo[196431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmuwkxhcbsgmvveflwnfkpvknayuwqou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610776.2546654-676-201785507235578/AnsiballZ_file.py'
Dec 13 07:26:16 compute-0 sudo[196431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:16 compute-0 python3.9[196433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:16 compute-0 sudo[196431]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:16 compute-0 sudo[196583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmsblkmxxinaaewnifpwdkmmgrymfdti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610776.6917439-676-9006204959146/AnsiballZ_file.py'
Dec 13 07:26:16 compute-0 sudo[196583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:17 compute-0 python3.9[196585]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:17 compute-0 sudo[196583]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:17 compute-0 sudo[196735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnstbjxdwazlqvdubugizmemoqryunxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610777.112781-676-43679322181464/AnsiballZ_file.py'
Dec 13 07:26:17 compute-0 sudo[196735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:17 compute-0 python3.9[196737]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:17 compute-0 sudo[196735]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:17 compute-0 ceph-mon[74928]: pgmap v485: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:17 compute-0 sudo[196887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znjxjvlwagpshnuulypimmyzupwpttnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610777.53729-676-93417682659001/AnsiballZ_file.py'
Dec 13 07:26:17 compute-0 sudo[196887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:17 compute-0 python3.9[196889]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:17 compute-0 sudo[196887]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:18 compute-0 sudo[197039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbuiyxnmultzfvratjyykmfrtmttipvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610777.9513657-676-17269392092191/AnsiballZ_file.py'
Dec 13 07:26:18 compute-0 sudo[197039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:18 compute-0 python3.9[197041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:18 compute-0 sudo[197039]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:18 compute-0 sudo[197191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrmeilwhvytwdejqckeeioilcxsdewml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610778.3667452-676-174256726714220/AnsiballZ_file.py'
Dec 13 07:26:18 compute-0 sudo[197191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:18 compute-0 python3.9[197193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:18 compute-0 sudo[197191]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:18 compute-0 sudo[197343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdxofubhqmnvfxpmlxifvvfgzabipmie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610778.7807343-676-99025623301177/AnsiballZ_file.py'
Dec 13 07:26:18 compute-0 sudo[197343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:19 compute-0 python3.9[197345]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:19 compute-0 sudo[197343]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:19 compute-0 sudo[197495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nggrsiznqgwbdejtfeqkumnisqyiixeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610779.192436-676-115894319674884/AnsiballZ_file.py'
Dec 13 07:26:19 compute-0 sudo[197495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:19 compute-0 python3.9[197497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:19 compute-0 sudo[197495]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:19 compute-0 ceph-mon[74928]: pgmap v486: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:19 compute-0 sudo[197647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxsqcbfnheaaezobxdnowsqdmfptzepy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610779.5989792-676-187516427486355/AnsiballZ_file.py'
Dec 13 07:26:19 compute-0 sudo[197647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:19 compute-0 python3.9[197649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:19 compute-0 sudo[197647]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:20 compute-0 sudo[197799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgbbgkbcpfrhzicbjsobnwnolslccjwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610780.006811-676-38802854844477/AnsiballZ_file.py'
Dec 13 07:26:20 compute-0 sudo[197799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:20 compute-0 python3.9[197801]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:20 compute-0 sudo[197799]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:20 compute-0 sudo[197951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aksjdcwakhslzctjwjqzqqmfhfxquqni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610780.4135227-676-101265945345827/AnsiballZ_file.py'
Dec 13 07:26:20 compute-0 sudo[197951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:20 compute-0 python3.9[197953]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:20 compute-0 sudo[197951]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:21 compute-0 sudo[198103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drzyetrsqiazccdvrlzfhpwztshgetme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610780.8333685-676-95747116867361/AnsiballZ_file.py'
Dec 13 07:26:21 compute-0 sudo[198103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:21 compute-0 python3.9[198105]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:21 compute-0 sudo[198103]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:21 compute-0 sudo[198255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpoqsoplrwkmwhukccknpshdvihalgmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610781.3013396-775-96218803633723/AnsiballZ_stat.py'
Dec 13 07:26:21 compute-0 sudo[198255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:21 compute-0 ceph-mon[74928]: pgmap v487: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:21 compute-0 python3.9[198257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:21 compute-0 sudo[198255]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:21 compute-0 sudo[198378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qluiowizrigefmbiumfebztozmvnjdee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610781.3013396-775-96218803633723/AnsiballZ_copy.py'
Dec 13 07:26:21 compute-0 sudo[198378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:22 compute-0 python3.9[198380]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610781.3013396-775-96218803633723/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:22 compute-0 sudo[198378]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:22 compute-0 sudo[198530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edrbuczhsqlilhxuxhaticgkrmcsuaqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610782.1160512-775-194283496911389/AnsiballZ_stat.py'
Dec 13 07:26:22 compute-0 sudo[198530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:22 compute-0 python3.9[198532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:22 compute-0 sudo[198530]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:22 compute-0 sudo[198653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjmboopczqoyztxicawdidofyqifydpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610782.1160512-775-194283496911389/AnsiballZ_copy.py'
Dec 13 07:26:22 compute-0 sudo[198653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:22 compute-0 python3.9[198655]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610782.1160512-775-194283496911389/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:22 compute-0 sudo[198653]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:23 compute-0 sudo[198819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erumcumvhnanfnihuajndkvqjkylyynk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610782.9213994-775-201541482522592/AnsiballZ_stat.py'
Dec 13 07:26:23 compute-0 sudo[198819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:23 compute-0 podman[198779]: 2025-12-13 07:26:23.156113319 +0000 UTC m=+0.065003651 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 13 07:26:23 compute-0 python3.9[198824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:23 compute-0 sudo[198819]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:23 compute-0 sudo[198951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpiedkonjwqbwxzukontjcadwmuujpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610782.9213994-775-201541482522592/AnsiballZ_copy.py'
Dec 13 07:26:23 compute-0 sudo[198951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:23 compute-0 ceph-mon[74928]: pgmap v488: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:23 compute-0 python3.9[198953]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610782.9213994-775-201541482522592/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:23 compute-0 sudo[198951]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:23 compute-0 sudo[199103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxrkaqrliwpdfhjmqtalfwzheltwxqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610783.773347-775-265912632889406/AnsiballZ_stat.py'
Dec 13 07:26:23 compute-0 sudo[199103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:24 compute-0 python3.9[199105]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:24 compute-0 sudo[199103]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:24 compute-0 sudo[199226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejlhacevzsobmnangrbdylcasyqbbqdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610783.773347-775-265912632889406/AnsiballZ_copy.py'
Dec 13 07:26:24 compute-0 sudo[199226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:24 compute-0 python3.9[199228]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610783.773347-775-265912632889406/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:24 compute-0 sudo[199226]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:24 compute-0 sudo[199378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwtxpmsihextdzoadzmglqhwrkaboupv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610784.6021693-775-39341613761582/AnsiballZ_stat.py'
Dec 13 07:26:24 compute-0 sudo[199378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:24 compute-0 python3.9[199380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:24 compute-0 sudo[199378]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:25 compute-0 sudo[199501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eahdxqoitkwwzdibzjkavasfdpxjauzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610784.6021693-775-39341613761582/AnsiballZ_copy.py'
Dec 13 07:26:25 compute-0 sudo[199501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:25 compute-0 python3.9[199503]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610784.6021693-775-39341613761582/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:25 compute-0 sudo[199501]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:25 compute-0 ceph-mon[74928]: pgmap v489: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:25 compute-0 sudo[199653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxuahtykfqquzzrcigqzdlwidunhjzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610785.4682457-775-224489218045234/AnsiballZ_stat.py'
Dec 13 07:26:25 compute-0 sudo[199653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:25 compute-0 python3.9[199655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:25 compute-0 sudo[199653]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:26 compute-0 sudo[199776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zijeohflzcymsylpofwmmtrzalrfxiqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610785.4682457-775-224489218045234/AnsiballZ_copy.py'
Dec 13 07:26:26 compute-0 sudo[199776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:26 compute-0 python3.9[199778]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610785.4682457-775-224489218045234/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:26 compute-0 sudo[199776]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:26 compute-0 sudo[199928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djeascnhpvbcuehgqdcecggccgonvgny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610786.3315778-775-61264009139138/AnsiballZ_stat.py'
Dec 13 07:26:26 compute-0 sudo[199928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:26 compute-0 python3.9[199930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:26 compute-0 sudo[199928]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:26 compute-0 sudo[200051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwhihxsmwhgoybxxsfnbjwkoluvnzcit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610786.3315778-775-61264009139138/AnsiballZ_copy.py'
Dec 13 07:26:26 compute-0 sudo[200051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:27 compute-0 python3.9[200053]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610786.3315778-775-61264009139138/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:27 compute-0 sudo[200051]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:27 compute-0 sudo[200203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akowkklllybtlapfpvgexiobjnztghiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610787.1600785-775-279648148565235/AnsiballZ_stat.py'
Dec 13 07:26:27 compute-0 sudo[200203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:27 compute-0 python3.9[200205]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:27 compute-0 sudo[200203]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:27 compute-0 ceph-mon[74928]: pgmap v490: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:27 compute-0 sudo[200326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdgawovvqtqsyhdoukswdhdlrjccrfzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610787.1600785-775-279648148565235/AnsiballZ_copy.py'
Dec 13 07:26:27 compute-0 sudo[200326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:27 compute-0 python3.9[200328]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610787.1600785-775-279648148565235/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:27 compute-0 sudo[200326]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:28 compute-0 sudo[200478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijenbcutuafczsrgahkjhrokswxdyvga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610788.0168755-775-198503001401738/AnsiballZ_stat.py'
Dec 13 07:26:28 compute-0 sudo[200478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:28 compute-0 python3.9[200480]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:28 compute-0 sudo[200478]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:28 compute-0 sudo[200601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crsjlytwkyhqbiidftmbqgldbjyavlwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610788.0168755-775-198503001401738/AnsiballZ_copy.py'
Dec 13 07:26:28 compute-0 sudo[200601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:28 compute-0 python3.9[200603]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610788.0168755-775-198503001401738/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:28 compute-0 sudo[200601]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:29 compute-0 sudo[200753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmtzwdluynekicsuqsgneeqvcyjpbgpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610788.8450496-775-79320314756497/AnsiballZ_stat.py'
Dec 13 07:26:29 compute-0 sudo[200753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:29 compute-0 python3.9[200755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:29 compute-0 sudo[200753]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:29 compute-0 sudo[200876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xblvuxbxzesbgjfbtketsobyhpskyznp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610788.8450496-775-79320314756497/AnsiballZ_copy.py'
Dec 13 07:26:29 compute-0 sudo[200876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:29 compute-0 ceph-mon[74928]: pgmap v491: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:29 compute-0 python3.9[200878]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610788.8450496-775-79320314756497/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:29 compute-0 sudo[200876]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:29 compute-0 sudo[201028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubunpvizdmqpawwaotfjifksqiskuwyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610789.6832333-775-203849921770375/AnsiballZ_stat.py'
Dec 13 07:26:29 compute-0 sudo[201028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:30 compute-0 python3.9[201030]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:30 compute-0 sudo[201028]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:30 compute-0 sudo[201151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyfhpgoxzdvfuxzdudfxfhbbohgpbmrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610789.6832333-775-203849921770375/AnsiballZ_copy.py'
Dec 13 07:26:30 compute-0 sudo[201151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:30 compute-0 python3.9[201153]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610789.6832333-775-203849921770375/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:30 compute-0 sudo[201151]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:30 compute-0 sudo[201303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slolwawiygfcvxwtrjjiepsbxfckdmqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610790.5229635-775-86555331229973/AnsiballZ_stat.py'
Dec 13 07:26:30 compute-0 sudo[201303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:30 compute-0 python3.9[201305]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:30 compute-0 sudo[201303]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:31 compute-0 sudo[201426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdxvktopicynucueyhpitdiilupcbytt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610790.5229635-775-86555331229973/AnsiballZ_copy.py'
Dec 13 07:26:31 compute-0 sudo[201426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:31 compute-0 python3.9[201428]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610790.5229635-775-86555331229973/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:31 compute-0 sudo[201426]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:31 compute-0 sudo[201578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-updbfdcucmseimbnkcyefaaacexlfdao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610791.3495486-775-41524285152840/AnsiballZ_stat.py'
Dec 13 07:26:31 compute-0 sudo[201578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:31 compute-0 ceph-mon[74928]: pgmap v492: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:31 compute-0 python3.9[201580]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:31 compute-0 sudo[201578]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:31 compute-0 sudo[201701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bogpukbbojzkowkzdhphznihgbgeolma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610791.3495486-775-41524285152840/AnsiballZ_copy.py'
Dec 13 07:26:31 compute-0 sudo[201701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:32 compute-0 python3.9[201703]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610791.3495486-775-41524285152840/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:32 compute-0 sudo[201701]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:32 compute-0 sudo[201853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usdhsjicgbiaiejlujvxpjwuvmwgpckl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610792.1607678-775-420766051131/AnsiballZ_stat.py'
Dec 13 07:26:32 compute-0 sudo[201853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:32 compute-0 python3.9[201855]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:32 compute-0 sudo[201853]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:32 compute-0 sudo[201976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wveddauvdizztwivwqikfmrnklewitpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610792.1607678-775-420766051131/AnsiballZ_copy.py'
Dec 13 07:26:32 compute-0 sudo[201976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:32 compute-0 python3.9[201978]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610792.1607678-775-420766051131/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:32 compute-0 sudo[201976]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:33 compute-0 python3.9[202128]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:26:33 compute-0 ceph-mon[74928]: pgmap v493: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:33 compute-0 sudo[202281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxyieuxmilouexbjjabcbaclfikimfwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610793.486624-981-35830621830484/AnsiballZ_seboolean.py'
Dec 13 07:26:33 compute-0 sudo[202281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:34 compute-0 python3.9[202283]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 13 07:26:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:34 compute-0 sudo[202281]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:35 compute-0 auditd[673]: Audit daemon rotating log files
Dec 13 07:26:35 compute-0 sudo[202437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amzpsazeftgrktxciqtajlrfxfqeuqcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610794.9068916-989-65637730521009/AnsiballZ_copy.py'
Dec 13 07:26:35 compute-0 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 13 07:26:35 compute-0 sudo[202437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:35 compute-0 python3.9[202439]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:35 compute-0 sudo[202437]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:35 compute-0 sudo[202589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emuwtvnlmapvtkqmdvuugctehxniimsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610795.3462343-989-153191667018723/AnsiballZ_copy.py'
Dec 13 07:26:35 compute-0 sudo[202589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:35 compute-0 ceph-mon[74928]: pgmap v494: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:35 compute-0 python3.9[202591]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:35 compute-0 sudo[202589]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:35 compute-0 sudo[202741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zefraipkwssdrbgvgcgaxqjwtgenknru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610795.7848866-989-13965304006742/AnsiballZ_copy.py'
Dec 13 07:26:35 compute-0 sudo[202741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:36 compute-0 python3.9[202743]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:36 compute-0 sudo[202741]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:36 compute-0 sudo[202893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwsmrfljqiqwvjqzqxkvzpjrqgxeusw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610796.2140975-989-118911097606795/AnsiballZ_copy.py'
Dec 13 07:26:36 compute-0 sudo[202893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:36 compute-0 python3.9[202895]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:36 compute-0 sudo[202893]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:36 compute-0 sudo[203045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riboomldrqauegdxbpzqteeyfuhmmbob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610796.648976-989-162680132731292/AnsiballZ_copy.py'
Dec 13 07:26:36 compute-0 sudo[203045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:36 compute-0 python3.9[203047]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:36 compute-0 sudo[203045]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:37 compute-0 sudo[203197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-augiecbezyvehsmcmbykhksslkqmtueb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610797.103168-1025-142939964535833/AnsiballZ_copy.py'
Dec 13 07:26:37 compute-0 sudo[203197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:37 compute-0 python3.9[203199]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:37 compute-0 sudo[203197]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:37 compute-0 ceph-mon[74928]: pgmap v495: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:37 compute-0 sudo[203349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpwoabwggadcrczkblyqmjzpjxqmjemg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610797.5322926-1025-137936576575677/AnsiballZ_copy.py'
Dec 13 07:26:37 compute-0 sudo[203349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:37 compute-0 python3.9[203351]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:37 compute-0 sudo[203349]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:38 compute-0 sudo[203501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnvbnooljimrjaebkaqacogcamqqfpuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610797.952488-1025-19596191738131/AnsiballZ_copy.py'
Dec 13 07:26:38 compute-0 sudo[203501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:26:38
Dec 13 07:26:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:26:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:26:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', '.rgw.root']
Dec 13 07:26:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:26:38 compute-0 python3.9[203503]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:38 compute-0 sudo[203501]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:38 compute-0 sudo[203653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwjfcdxvbdbxeubmfreiswmqimsxiyvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610798.3878558-1025-169766739188633/AnsiballZ_copy.py'
Dec 13 07:26:38 compute-0 sudo[203653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:38 compute-0 python3.9[203655]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:38 compute-0 sudo[203653]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:38 compute-0 sudo[203805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blwvfoxddxrmnqcjpnuzxmfbqjxzneem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610798.8199952-1025-147264942489997/AnsiballZ_copy.py'
Dec 13 07:26:38 compute-0 sudo[203805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:26:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:26:39 compute-0 python3.9[203807]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:39 compute-0 sudo[203805]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:39 compute-0 sudo[203957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-povyywkqyehupgevywhbyfdvpjiiuhfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610799.2911248-1061-46623821437395/AnsiballZ_systemd.py'
Dec 13 07:26:39 compute-0 sudo[203957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:39 compute-0 ceph-mon[74928]: pgmap v496: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:39 compute-0 python3.9[203959]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:26:39 compute-0 systemd[1]: Reloading.
Dec 13 07:26:39 compute-0 systemd-rc-local-generator[203980]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:26:39 compute-0 systemd-sysv-generator[203983]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:26:39 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 13 07:26:39 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 13 07:26:39 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 13 07:26:39 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 13 07:26:40 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 13 07:26:40 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 13 07:26:40 compute-0 sudo[203957]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:40 compute-0 sudo[204150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbpnuaritlntcafkjsijnbhvlnbyfzpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610800.1868606-1061-124171737533736/AnsiballZ_systemd.py'
Dec 13 07:26:40 compute-0 sudo[204150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:40 compute-0 python3.9[204152]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:26:40 compute-0 systemd[1]: Reloading.
Dec 13 07:26:40 compute-0 systemd-sysv-generator[204176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:26:40 compute-0 systemd-rc-local-generator[204173]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:26:40 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 13 07:26:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 13 07:26:40 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 13 07:26:40 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 13 07:26:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 13 07:26:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 13 07:26:40 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 13 07:26:40 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 13 07:26:40 compute-0 sudo[204150]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:41 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 13 07:26:41 compute-0 sudo[204367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvtxkqtwtymdrkamuopdfdnoetbqjljt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610801.0197473-1061-199448959650041/AnsiballZ_systemd.py'
Dec 13 07:26:41 compute-0 sudo[204367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:41 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 13 07:26:41 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 13 07:26:41 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 13 07:26:41 compute-0 python3.9[204369]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:26:41 compute-0 systemd[1]: Reloading.
Dec 13 07:26:41 compute-0 systemd-sysv-generator[204400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:26:41 compute-0 systemd-rc-local-generator[204396]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:26:41 compute-0 ceph-mon[74928]: pgmap v497: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:26:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:26:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:26:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:26:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:26:41.636 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:26:41 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 13 07:26:41 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 13 07:26:41 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 13 07:26:41 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 13 07:26:41 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 13 07:26:41 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 13 07:26:41 compute-0 sudo[204367]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:42 compute-0 setroubleshoot[204262]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 55e08cda-00e9-470e-9032-e41e5c81e568
Dec 13 07:26:42 compute-0 setroubleshoot[204262]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 13 07:26:42 compute-0 setroubleshoot[204262]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 55e08cda-00e9-470e-9032-e41e5c81e568
Dec 13 07:26:42 compute-0 setroubleshoot[204262]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 13 07:26:42 compute-0 sudo[204588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccntpptufxyxrnroersobtpthbtrusmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610801.8928158-1061-180778198421194/AnsiballZ_systemd.py'
Dec 13 07:26:42 compute-0 sudo[204588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:42 compute-0 python3.9[204590]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:26:42 compute-0 systemd[1]: Reloading.
Dec 13 07:26:42 compute-0 systemd-rc-local-generator[204616]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:26:42 compute-0 systemd-sysv-generator[204620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:26:42 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 13 07:26:42 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 13 07:26:42 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 07:26:42 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 13 07:26:42 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 13 07:26:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:42 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 13 07:26:42 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 13 07:26:42 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 13 07:26:42 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 13 07:26:42 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 13 07:26:42 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 13 07:26:42 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 13 07:26:42 compute-0 sudo[204588]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:43 compute-0 sudo[204812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwbdhjesyfwppzzqpmphnvnticdppfld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610802.9207819-1061-47770225455968/AnsiballZ_systemd.py'
Dec 13 07:26:43 compute-0 sudo[204812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:43 compute-0 podman[204777]: 2025-12-13 07:26:43.187015137 +0000 UTC m=+0.045913781 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 07:26:43 compute-0 python3.9[204821]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:26:43 compute-0 systemd[1]: Reloading.
Dec 13 07:26:43 compute-0 systemd-rc-local-generator[204846]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:26:43 compute-0 systemd-sysv-generator[204850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:26:43 compute-0 ceph-mon[74928]: pgmap v498: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:43 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 13 07:26:43 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 13 07:26:43 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 13 07:26:43 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 13 07:26:43 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 13 07:26:43 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 13 07:26:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 13 07:26:43 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 13 07:26:43 compute-0 sudo[204812]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:44 compute-0 sudo[205031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xplitlnnoxlazeojowctudmtyocwtbyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610803.948153-1098-144861675382782/AnsiballZ_file.py'
Dec 13 07:26:44 compute-0 sudo[205031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:44 compute-0 python3.9[205033]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:44 compute-0 sudo[205031]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:44 compute-0 sudo[205183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fveimmbjdzixhhjoljfkaakvgwzjsskl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610804.4270742-1106-217559286844048/AnsiballZ_find.py'
Dec 13 07:26:44 compute-0 sudo[205183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:44 compute-0 python3.9[205185]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 07:26:44 compute-0 sudo[205183]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:45 compute-0 sudo[205335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afvnqfrfrrtybxfgvkplsfruwdfcsxey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610804.896854-1114-202502335321341/AnsiballZ_command.py'
Dec 13 07:26:45 compute-0 sudo[205335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:45 compute-0 python3.9[205337]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:26:45 compute-0 sudo[205335]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:45 compute-0 ceph-mon[74928]: pgmap v499: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:45 compute-0 python3.9[205491]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 07:26:46 compute-0 python3.9[205641]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:46 compute-0 python3.9[205762]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610805.968186-1133-28727704566000/.source.xml follow=False _original_basename=secret.xml.j2 checksum=986bd10345e3383175c34605d56e412042b35351 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:47 compute-0 sudo[205912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qunbsvclrvqcpfurqctwkezjvdrybakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610806.8379388-1148-128227088544833/AnsiballZ_command.py'
Dec 13 07:26:47 compute-0 sudo[205912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:47 compute-0 python3.9[205914]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 00fdae1b-7fad-5f1b-8734-ba4d9298a6de
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:26:47 compute-0 polkitd[43388]: Registered Authentication Agent for unix-process:205916:268078 (system bus name :1.2526 [pkttyagent --process 205916 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 13 07:26:47 compute-0 polkitd[43388]: Unregistered Authentication Agent for unix-process:205916:268078 (system bus name :1.2526, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 13 07:26:47 compute-0 polkitd[43388]: Registered Authentication Agent for unix-process:205915:268077 (system bus name :1.2527 [pkttyagent --process 205915 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 13 07:26:47 compute-0 polkitd[43388]: Unregistered Authentication Agent for unix-process:205915:268077 (system bus name :1.2527, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 13 07:26:47 compute-0 sudo[205912]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:47 compute-0 ceph-mon[74928]: pgmap v500: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:47 compute-0 python3.9[206076]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:48 compute-0 sudo[206226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inhjerzpsncimqcdjfzrxklbuqbahffd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610807.8535635-1164-26179240457603/AnsiballZ_command.py'
Dec 13 07:26:48 compute-0 sudo[206226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:48 compute-0 sudo[206226]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:26:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:26:48 compute-0 sudo[206379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnwamaqoyhjffovcehqorxdpvgmjqrld ; FSID=00fdae1b-7fad-5f1b-8734-ba4d9298a6de KEY=AQDvET1pAAAAABAAXRTVwZkpmvDiKzdXsEX84w== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610808.3376892-1172-270569674932811/AnsiballZ_command.py'
Dec 13 07:26:48 compute-0 sudo[206379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:48 compute-0 polkitd[43388]: Registered Authentication Agent for unix-process:206382:268229 (system bus name :1.2530 [pkttyagent --process 206382 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 13 07:26:48 compute-0 polkitd[43388]: Unregistered Authentication Agent for unix-process:206382:268229 (system bus name :1.2530, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 13 07:26:48 compute-0 sudo[206379]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:49 compute-0 sudo[206537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhfkxbcxgniwoctbdnpspwlhgsanadhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610808.8783393-1180-4572658154810/AnsiballZ_copy.py'
Dec 13 07:26:49 compute-0 sudo[206537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:49 compute-0 python3.9[206539]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:49 compute-0 sudo[206537]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:49 compute-0 ceph-mon[74928]: pgmap v501: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:49 compute-0 sudo[206689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjecyrzpenexxfqcifkoskwyhtgazrrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610809.4419692-1188-251716400389118/AnsiballZ_stat.py'
Dec 13 07:26:49 compute-0 sudo[206689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:49 compute-0 python3.9[206691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:49 compute-0 sudo[206692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:26:49 compute-0 sudo[206692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:49 compute-0 sudo[206692]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:49 compute-0 sudo[206689]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:49 compute-0 sudo[206717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:26:49 compute-0 sudo[206717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:50 compute-0 sudo[206871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxobmwigrwldbbthsoeeupkfggfqrtwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610809.4419692-1188-251716400389118/AnsiballZ_copy.py'
Dec 13 07:26:50 compute-0 sudo[206871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:50 compute-0 python3.9[206876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610809.4419692-1188-251716400389118/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:50 compute-0 sudo[206871]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:50 compute-0 sudo[206717]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:26:50 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:26:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:26:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:26:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:26:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:26:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:26:50 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:26:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:26:50 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:26:50 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:26:50 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:26:50 compute-0 sudo[206918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:26:50 compute-0 sudo[206918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:50 compute-0 sudo[206918]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:50 compute-0 sudo[206943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:26:50 compute-0 sudo[206943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:26:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:26:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:26:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:26:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:26:50 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:26:50 compute-0 sudo[207111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhwkjhhscqjhnmikkbldvnmjnbozaxtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610810.4744885-1204-100120971792299/AnsiballZ_file.py'
Dec 13 07:26:50 compute-0 sudo[207111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:50 compute-0 podman[207086]: 2025-12-13 07:26:50.664385509 +0000 UTC m=+0.029646662 container create 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:26:50 compute-0 systemd[1]: Started libpod-conmon-43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6.scope.
Dec 13 07:26:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:26:50 compute-0 podman[207086]: 2025-12-13 07:26:50.732090324 +0000 UTC m=+0.097351487 container init 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:26:50 compute-0 podman[207086]: 2025-12-13 07:26:50.737829795 +0000 UTC m=+0.103090949 container start 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 07:26:50 compute-0 podman[207086]: 2025-12-13 07:26:50.738904454 +0000 UTC m=+0.104165597 container attach 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 07:26:50 compute-0 cool_shtern[207119]: 167 167
Dec 13 07:26:50 compute-0 conmon[207119]: conmon 43fb120376478f4372e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6.scope/container/memory.events
Dec 13 07:26:50 compute-0 systemd[1]: libpod-43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6.scope: Deactivated successfully.
Dec 13 07:26:50 compute-0 podman[207086]: 2025-12-13 07:26:50.742622618 +0000 UTC m=+0.107883771 container died 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 07:26:50 compute-0 podman[207086]: 2025-12-13 07:26:50.653028856 +0000 UTC m=+0.018290019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d55421f5b8d8a7b2c16e9cc095adcd2c890bc99c611698a169b1ef52c6c31191-merged.mount: Deactivated successfully.
Dec 13 07:26:50 compute-0 podman[207086]: 2025-12-13 07:26:50.7609256 +0000 UTC m=+0.126186744 container remove 43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shtern, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:26:50 compute-0 systemd[1]: libpod-conmon-43fb120376478f4372e8b832d798721e0cb2279a20e3e9b102a9ee14add8c9c6.scope: Deactivated successfully.
Dec 13 07:26:50 compute-0 python3.9[207115]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:50 compute-0 sudo[207111]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:50 compute-0 podman[207141]: 2025-12-13 07:26:50.885656921 +0000 UTC m=+0.030465642 container create e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:26:50 compute-0 systemd[1]: Started libpod-conmon-e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d.scope.
Dec 13 07:26:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:50 compute-0 podman[207141]: 2025-12-13 07:26:50.946024442 +0000 UTC m=+0.090833183 container init e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:26:50 compute-0 podman[207141]: 2025-12-13 07:26:50.951079187 +0000 UTC m=+0.095887908 container start e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:26:50 compute-0 podman[207141]: 2025-12-13 07:26:50.952430405 +0000 UTC m=+0.097239126 container attach e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:26:50 compute-0 podman[207141]: 2025-12-13 07:26:50.874371561 +0000 UTC m=+0.019180292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:26:51 compute-0 sudo[207312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqvwpywmrbzwavflqgsgztyqxxvymcfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610810.967728-1212-128922679174685/AnsiballZ_stat.py'
Dec 13 07:26:51 compute-0 sudo[207312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:51 compute-0 modest_moser[207178]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:26:51 compute-0 modest_moser[207178]: --> All data devices are unavailable
Dec 13 07:26:51 compute-0 python3.9[207315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:51 compute-0 systemd[1]: libpod-e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d.scope: Deactivated successfully.
Dec 13 07:26:51 compute-0 sudo[207312]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:51 compute-0 podman[207328]: 2025-12-13 07:26:51.360137528 +0000 UTC m=+0.016894028 container died e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 07:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5af597d1f3d65db9bc21c49b7e8ee23e07b310b77673758b7ccd6ba37a6fa28f-merged.mount: Deactivated successfully.
Dec 13 07:26:51 compute-0 podman[207328]: 2025-12-13 07:26:51.385087637 +0000 UTC m=+0.041844137 container remove e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_moser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:26:51 compute-0 systemd[1]: libpod-conmon-e2a8e27a232cb0b8ccc99995a7d6ea8aebf5ecab9d6c5a9c83190cbe717efc4d.scope: Deactivated successfully.
Dec 13 07:26:51 compute-0 sudo[206943]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:51 compute-0 sudo[207376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:26:51 compute-0 sudo[207376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:51 compute-0 sudo[207376]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:51 compute-0 sudo[207440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqwrugzcilbdncbrgdsodjhowqveblbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610810.967728-1212-128922679174685/AnsiballZ_file.py'
Dec 13 07:26:51 compute-0 sudo[207440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:51 compute-0 sudo[207435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:26:51 compute-0 sudo[207435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:51 compute-0 ceph-mon[74928]: pgmap v502: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:51 compute-0 python3.9[207459]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:51 compute-0 sudo[207440]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:51 compute-0 podman[207483]: 2025-12-13 07:26:51.732487692 +0000 UTC m=+0.032125099 container create 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:26:51 compute-0 systemd[1]: Started libpod-conmon-6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b.scope.
Dec 13 07:26:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:26:51 compute-0 podman[207483]: 2025-12-13 07:26:51.776280798 +0000 UTC m=+0.075918224 container init 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:26:51 compute-0 podman[207483]: 2025-12-13 07:26:51.781959926 +0000 UTC m=+0.081597332 container start 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:26:51 compute-0 podman[207483]: 2025-12-13 07:26:51.783060805 +0000 UTC m=+0.082698211 container attach 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 07:26:51 compute-0 cranky_davinci[207513]: 167 167
Dec 13 07:26:51 compute-0 systemd[1]: libpod-6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b.scope: Deactivated successfully.
Dec 13 07:26:51 compute-0 podman[207483]: 2025-12-13 07:26:51.786052886 +0000 UTC m=+0.085690291 container died 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c48366593077afdbfd905c30938b275d06cc31e0580eab932b192b398585849-merged.mount: Deactivated successfully.
Dec 13 07:26:51 compute-0 podman[207483]: 2025-12-13 07:26:51.80571978 +0000 UTC m=+0.105357186 container remove 6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:26:51 compute-0 podman[207483]: 2025-12-13 07:26:51.721166305 +0000 UTC m=+0.020803732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:26:51 compute-0 systemd[1]: libpod-conmon-6f57f3d2e27be8fb222470354e8cdaac51875a7267339b36dbee2c656122c86b.scope: Deactivated successfully.
Dec 13 07:26:51 compute-0 podman[207610]: 2025-12-13 07:26:51.925815761 +0000 UTC m=+0.027561426 container create 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:26:51 compute-0 systemd[1]: Started libpod-conmon-7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac.scope.
Dec 13 07:26:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:51 compute-0 podman[207610]: 2025-12-13 07:26:51.981636933 +0000 UTC m=+0.083382606 container init 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:26:51 compute-0 podman[207610]: 2025-12-13 07:26:51.987913493 +0000 UTC m=+0.089659147 container start 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 07:26:51 compute-0 podman[207610]: 2025-12-13 07:26:51.990140647 +0000 UTC m=+0.091886311 container attach 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:26:52 compute-0 podman[207610]: 2025-12-13 07:26:51.91507238 +0000 UTC m=+0.016818054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:26:52 compute-0 sudo[207678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvoarqwdijncefkxeghzcqieglhzhxgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610811.7875001-1224-46575764213310/AnsiballZ_stat.py'
Dec 13 07:26:52 compute-0 sudo[207678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:52 compute-0 python3.9[207680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:52 compute-0 sudo[207678]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:52 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]: {
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:     "0": [
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:         {
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "devices": [
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "/dev/loop3"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             ],
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_name": "ceph_lv0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_size": "21470642176",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "name": "ceph_lv0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "tags": {
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cluster_name": "ceph",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.crush_device_class": "",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.encrypted": "0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.objectstore": "bluestore",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osd_id": "0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.type": "block",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.vdo": "0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.with_tpm": "0"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             },
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "type": "block",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "vg_name": "ceph_vg0"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:         }
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:     ],
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:     "1": [
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:         {
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "devices": [
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "/dev/loop4"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             ],
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_name": "ceph_lv1",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_size": "21470642176",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "name": "ceph_lv1",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "tags": {
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cluster_name": "ceph",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.crush_device_class": "",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.encrypted": "0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.objectstore": "bluestore",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osd_id": "1",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.type": "block",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.vdo": "0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.with_tpm": "0"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             },
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "type": "block",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "vg_name": "ceph_vg1"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:         }
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:     ],
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:     "2": [
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:         {
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "devices": [
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "/dev/loop5"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             ],
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_name": "ceph_lv2",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_size": "21470642176",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "name": "ceph_lv2",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "tags": {
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.cluster_name": "ceph",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.crush_device_class": "",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.encrypted": "0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.objectstore": "bluestore",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osd_id": "2",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.type": "block",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.vdo": "0",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:                 "ceph.with_tpm": "0"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             },
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "type": "block",
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:             "vg_name": "ceph_vg2"
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:         }
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]:     ]
Dec 13 07:26:52 compute-0 vigilant_wescoff[207647]: }
Dec 13 07:26:52 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 13 07:26:52 compute-0 podman[207610]: 2025-12-13 07:26:52.242199703 +0000 UTC m=+0.343945357 container died 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:26:52 compute-0 systemd[1]: libpod-7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac.scope: Deactivated successfully.
Dec 13 07:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd4267b91c5c2089d27140b508b761e7d64215e1b573b8e94097eb1e740da499-merged.mount: Deactivated successfully.
Dec 13 07:26:52 compute-0 podman[207610]: 2025-12-13 07:26:52.265284758 +0000 UTC m=+0.367030412 container remove 7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:26:52 compute-0 systemd[1]: libpod-conmon-7a14f472295a2e670d41b7bde58cd6d27b690485c52a03f338042c9830b09fac.scope: Deactivated successfully.
Dec 13 07:26:52 compute-0 sudo[207435]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:52 compute-0 sudo[207744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:26:52 compute-0 sudo[207744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:52 compute-0 sudo[207744]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:52 compute-0 sudo[207793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smlbxkfxnzailwowgbpslxqqzrnstxpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610811.7875001-1224-46575764213310/AnsiballZ_file.py'
Dec 13 07:26:52 compute-0 sudo[207793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:52 compute-0 sudo[207797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:26:52 compute-0 sudo[207797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:52 compute-0 python3.9[207800]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.42gwbqr_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:52 compute-0 sudo[207793]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:52 compute-0 podman[207846]: 2025-12-13 07:26:52.605948659 +0000 UTC m=+0.027811977 container create 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:26:52 compute-0 systemd[1]: Started libpod-conmon-37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9.scope.
Dec 13 07:26:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:26:52 compute-0 podman[207846]: 2025-12-13 07:26:52.657965764 +0000 UTC m=+0.079829091 container init 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:26:52 compute-0 podman[207846]: 2025-12-13 07:26:52.662891758 +0000 UTC m=+0.084755074 container start 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 07:26:52 compute-0 podman[207846]: 2025-12-13 07:26:52.663942331 +0000 UTC m=+0.085805648 container attach 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:26:52 compute-0 boring_thompson[207875]: 167 167
Dec 13 07:26:52 compute-0 systemd[1]: libpod-37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9.scope: Deactivated successfully.
Dec 13 07:26:52 compute-0 podman[207846]: 2025-12-13 07:26:52.667490116 +0000 UTC m=+0.089353453 container died 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:26:52 compute-0 podman[207846]: 2025-12-13 07:26:52.686316813 +0000 UTC m=+0.108180130 container remove 37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:26:52 compute-0 podman[207846]: 2025-12-13 07:26:52.595322038 +0000 UTC m=+0.017185374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:26:52 compute-0 systemd[1]: libpod-conmon-37b7916ea764c3d936429503949f8c566375f558997ef8f5e02a8630ae8397a9.scope: Deactivated successfully.
Dec 13 07:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-883ff7f6ee448938cdd3c436b832c005bec9caf7c3c1d3c2bd89ed590ac870d3-merged.mount: Deactivated successfully.
Dec 13 07:26:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:52 compute-0 podman[207967]: 2025-12-13 07:26:52.807430025 +0000 UTC m=+0.027394321 container create 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:26:52 compute-0 systemd[1]: Started libpod-conmon-47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312.scope.
Dec 13 07:26:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:26:52 compute-0 podman[207967]: 2025-12-13 07:26:52.862855803 +0000 UTC m=+0.082820119 container init 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:26:52 compute-0 podman[207967]: 2025-12-13 07:26:52.868505155 +0000 UTC m=+0.088469452 container start 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:26:52 compute-0 sudo[208033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngmbbblskbsqfqwhdlrvyprlalfwqine ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610812.6595457-1236-8212029509244/AnsiballZ_stat.py'
Dec 13 07:26:52 compute-0 podman[207967]: 2025-12-13 07:26:52.871663688 +0000 UTC m=+0.091628005 container attach 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:26:52 compute-0 sudo[208033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:52 compute-0 podman[207967]: 2025-12-13 07:26:52.796303293 +0000 UTC m=+0.016267599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:26:53 compute-0 python3.9[208037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:53 compute-0 sudo[208033]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:53 compute-0 sudo[208171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziekkedzmemfoxvezamflhdefxclobvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610812.6595457-1236-8212029509244/AnsiballZ_file.py'
Dec 13 07:26:53 compute-0 sudo[208171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:53 compute-0 podman[208125]: 2025-12-13 07:26:53.327391579 +0000 UTC m=+0.105729024 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 13 07:26:53 compute-0 lvm[208212]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:26:53 compute-0 lvm[208211]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:26:53 compute-0 lvm[208211]: VG ceph_vg0 finished
Dec 13 07:26:53 compute-0 lvm[208212]: VG ceph_vg1 finished
Dec 13 07:26:53 compute-0 python3.9[208177]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:53 compute-0 lvm[208215]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:26:53 compute-0 lvm[208215]: VG ceph_vg2 finished
Dec 13 07:26:53 compute-0 sudo[208171]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:53 compute-0 hardcore_morse[208011]: {}
Dec 13 07:26:53 compute-0 lvm[208218]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:26:53 compute-0 lvm[208218]: VG ceph_vg2 finished
Dec 13 07:26:53 compute-0 systemd[1]: libpod-47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312.scope: Deactivated successfully.
Dec 13 07:26:53 compute-0 podman[207967]: 2025-12-13 07:26:53.489037964 +0000 UTC m=+0.709002259 container died 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a39c763b7ec6d90bdf0ec7a4c7c1e36b7118481296940b54d2b6b28d38b89af4-merged.mount: Deactivated successfully.
Dec 13 07:26:53 compute-0 podman[207967]: 2025-12-13 07:26:53.51324654 +0000 UTC m=+0.733210836 container remove 47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_morse, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:26:53 compute-0 systemd[1]: libpod-conmon-47e218630f3a946e53d6cd91126421ea325ac4e6edecbfb2dcf68ad81c473312.scope: Deactivated successfully.
Dec 13 07:26:53 compute-0 sudo[207797]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:26:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:26:53 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:26:53 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:26:53 compute-0 sudo[208253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:26:53 compute-0 sudo[208253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:26:53 compute-0 sudo[208253]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:53 compute-0 ceph-mon[74928]: pgmap v503: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:26:53 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:26:53 compute-0 sudo[208403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwelefidaoqkmeywrlwgzxkznhgmqifw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610813.6278892-1249-189656736681132/AnsiballZ_command.py'
Dec 13 07:26:53 compute-0 sudo[208403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:53 compute-0 python3.9[208405]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:26:53 compute-0 sudo[208403]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:54 compute-0 sudo[208556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubuxjoruathcemexaqyhdnrosdtjklfl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610814.1142976-1257-54081528516780/AnsiballZ_edpm_nftables_from_files.py'
Dec 13 07:26:54 compute-0 sudo[208556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:54 compute-0 python3[208558]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 07:26:54 compute-0 sudo[208556]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:54 compute-0 sudo[208708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juqrqpymfksugehuyazzusgwnsoqhfgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610814.716674-1265-21425994032204/AnsiballZ_stat.py'
Dec 13 07:26:54 compute-0 sudo[208708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:55 compute-0 python3.9[208710]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:55 compute-0 sudo[208708]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:55 compute-0 sudo[208786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujfdhidnmuquetedvxqqbdjmylkhqhss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610814.716674-1265-21425994032204/AnsiballZ_file.py'
Dec 13 07:26:55 compute-0 sudo[208786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:55 compute-0 python3.9[208788]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:55 compute-0 sudo[208786]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:55 compute-0 ceph-mon[74928]: pgmap v504: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:55 compute-0 sudo[208938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erizmenyvucmuedyvmimbopmjrrevffe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610815.6254666-1277-170713811237401/AnsiballZ_stat.py'
Dec 13 07:26:55 compute-0 sudo[208938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:55 compute-0 python3.9[208940]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:56 compute-0 sudo[208938]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:56 compute-0 sudo[209016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwvqxdcohrvfspyazjegvsrjopxyjkly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610815.6254666-1277-170713811237401/AnsiballZ_file.py'
Dec 13 07:26:56 compute-0 sudo[209016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:56 compute-0 python3.9[209018]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:56 compute-0 sudo[209016]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:56 compute-0 sudo[209168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzusajiswpybsqqinuwvdqtgdldeeguh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610816.4352224-1289-123807888756549/AnsiballZ_stat.py'
Dec 13 07:26:56 compute-0 sudo[209168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:56 compute-0 python3.9[209170]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:56 compute-0 sudo[209168]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:56 compute-0 sudo[209246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgrnvakqshkpbsczedimzlqejkodtxzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610816.4352224-1289-123807888756549/AnsiballZ_file.py'
Dec 13 07:26:56 compute-0 sudo[209246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:57 compute-0 python3.9[209248]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:57 compute-0 sudo[209246]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:57 compute-0 sudo[209398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbnjirhznsejgxoikdiwmfawmsjsihfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610817.239914-1301-160406865577487/AnsiballZ_stat.py'
Dec 13 07:26:57 compute-0 sudo[209398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:57 compute-0 python3.9[209400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:57 compute-0 ceph-mon[74928]: pgmap v505: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:57 compute-0 sudo[209398]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:57 compute-0 sudo[209476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xblmljryptccobozyztdqerhztdravei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610817.239914-1301-160406865577487/AnsiballZ_file.py'
Dec 13 07:26:57 compute-0 sudo[209476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:26:57 compute-0 python3.9[209478]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:57 compute-0 sudo[209476]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:58 compute-0 sudo[209628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaplafzvcxhcwrrupbdieidquteepnvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610818.048048-1313-24626378437614/AnsiballZ_stat.py'
Dec 13 07:26:58 compute-0 sudo[209628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:58 compute-0 python3.9[209630]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:26:58 compute-0 sudo[209628]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:58 compute-0 sudo[209753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzmumcuttvdidonwechzacvhutmefmfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610818.048048-1313-24626378437614/AnsiballZ_copy.py'
Dec 13 07:26:58 compute-0 sudo[209753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:58 compute-0 python3.9[209755]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610818.048048-1313-24626378437614/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:58 compute-0 sudo[209753]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:59 compute-0 sudo[209905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiverunithaqnvqnlmgbglpkofambezi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610819.0044334-1328-43940734486636/AnsiballZ_file.py'
Dec 13 07:26:59 compute-0 sudo[209905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:59 compute-0 python3.9[209907]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:26:59 compute-0 sudo[209905]: pam_unix(sudo:session): session closed for user root
Dec 13 07:26:59 compute-0 ceph-mon[74928]: pgmap v506: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:26:59 compute-0 sudo[210057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbpyvgfgmxqukxotwjlwqmhpjjmduipq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610819.474957-1336-90293647640043/AnsiballZ_command.py'
Dec 13 07:26:59 compute-0 sudo[210057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:26:59 compute-0 python3.9[210059]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:26:59 compute-0 sudo[210057]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:00 compute-0 sudo[210212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrbapqscijxsmnytbozwpqzeaktuvruq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610820.0344834-1344-165311467798186/AnsiballZ_blockinfile.py'
Dec 13 07:27:00 compute-0 sudo[210212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:00 compute-0 python3.9[210214]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:00 compute-0 sudo[210212]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:00 compute-0 sudo[210364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnpgyyajxlcwqtdqovpaxysacwgalqmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610820.686573-1353-79503678244401/AnsiballZ_command.py'
Dec 13 07:27:00 compute-0 sudo[210364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:01 compute-0 python3.9[210366]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:27:01 compute-0 sudo[210364]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:01 compute-0 sudo[210517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kelunaqnwqgzfsagpuaurjasobdevrlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610821.1463878-1361-46921945452818/AnsiballZ_stat.py'
Dec 13 07:27:01 compute-0 sudo[210517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:01 compute-0 python3.9[210519]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:27:01 compute-0 sudo[210517]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:01 compute-0 ceph-mon[74928]: pgmap v507: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:01 compute-0 sudo[210671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipubflaebrayxvdvjklnujnrfagwyfwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610821.601992-1369-51251328065208/AnsiballZ_command.py'
Dec 13 07:27:01 compute-0 sudo[210671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:01 compute-0 python3.9[210673]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:27:01 compute-0 sudo[210671]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:02 compute-0 sudo[210826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zddgjkewpongbiwfcuksmhplmuganfxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610822.0787528-1377-158594598803709/AnsiballZ_file.py'
Dec 13 07:27:02 compute-0 sudo[210826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:02 compute-0 python3.9[210828]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:02 compute-0 sudo[210826]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:02 compute-0 sudo[210978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbarofhknznhmjxwwjakyamqdhszywqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610822.535394-1385-2114878408640/AnsiballZ_stat.py'
Dec 13 07:27:02 compute-0 sudo[210978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:02 compute-0 python3.9[210980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:02 compute-0 sudo[210978]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:03 compute-0 sudo[211101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynllsikckuqaaeuguiyiybvujjibfkuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610822.535394-1385-2114878408640/AnsiballZ_copy.py'
Dec 13 07:27:03 compute-0 sudo[211101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:03 compute-0 python3.9[211103]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610822.535394-1385-2114878408640/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:03 compute-0 sudo[211101]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:03 compute-0 sudo[211253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgacpjapzczqwdusimcounuwmgtkkpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610823.3537269-1400-261279812342670/AnsiballZ_stat.py'
Dec 13 07:27:03 compute-0 sudo[211253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:03 compute-0 ceph-mon[74928]: pgmap v508: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:03 compute-0 python3.9[211255]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:03 compute-0 sudo[211253]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:03 compute-0 sudo[211376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhpytmleyvvtutponxvpzulcmvimzkbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610823.3537269-1400-261279812342670/AnsiballZ_copy.py'
Dec 13 07:27:03 compute-0 sudo[211376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:04 compute-0 python3.9[211378]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610823.3537269-1400-261279812342670/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:04 compute-0 sudo[211376]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:04 compute-0 sudo[211528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwwjwnwxfssvgaspqaqrthcobdycisar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610824.1968408-1415-196765772484420/AnsiballZ_stat.py'
Dec 13 07:27:04 compute-0 sudo[211528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:04 compute-0 python3.9[211530]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:04 compute-0 sudo[211528]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:04 compute-0 sudo[211651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovvyphoacqhabclrocuhvekzebqfhpzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610824.1968408-1415-196765772484420/AnsiballZ_copy.py'
Dec 13 07:27:04 compute-0 sudo[211651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:04 compute-0 python3.9[211653]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610824.1968408-1415-196765772484420/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:04 compute-0 sudo[211651]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:05 compute-0 sudo[211803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czmievqnxeiuelwnpgrmckzkbcxqhjnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610825.0713568-1430-243355084617702/AnsiballZ_systemd.py'
Dec 13 07:27:05 compute-0 sudo[211803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:05 compute-0 python3.9[211805]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:27:05 compute-0 systemd[1]: Reloading.
Dec 13 07:27:05 compute-0 systemd-sysv-generator[211829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:27:05 compute-0 systemd-rc-local-generator[211826]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:27:05 compute-0 ceph-mon[74928]: pgmap v509: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:05 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 13 07:27:05 compute-0 sudo[211803]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:06 compute-0 sudo[211993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjykkqqfxuqjpkcchmeufqphdmtdhatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610825.943663-1438-67982803302588/AnsiballZ_systemd.py'
Dec 13 07:27:06 compute-0 sudo[211993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:06 compute-0 python3.9[211995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 07:27:06 compute-0 systemd[1]: Reloading.
Dec 13 07:27:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:06 compute-0 systemd-rc-local-generator[212022]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:27:06 compute-0 systemd-sysv-generator[212026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:27:06 compute-0 systemd[1]: Reloading.
Dec 13 07:27:06 compute-0 systemd-rc-local-generator[212050]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:27:06 compute-0 systemd-sysv-generator[212053]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:27:06 compute-0 sudo[211993]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:07 compute-0 sshd-session[154237]: Connection closed by 192.168.122.30 port 39060
Dec 13 07:27:07 compute-0 sshd-session[154234]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:27:07 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec 13 07:27:07 compute-0 systemd[1]: session-51.scope: Consumed 2min 26.137s CPU time.
Dec 13 07:27:07 compute-0 systemd-logind[745]: Session 51 logged out. Waiting for processes to exit.
Dec 13 07:27:07 compute-0 systemd-logind[745]: Removed session 51.
Dec 13 07:27:07 compute-0 ceph-mon[74928]: pgmap v510: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:27:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:27:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:27:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:27:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:27:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:27:09 compute-0 ceph-mon[74928]: pgmap v511: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:11 compute-0 ceph-mon[74928]: pgmap v512: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:12 compute-0 sshd-session[212092]: Accepted publickey for zuul from 192.168.122.30 port 52180 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:27:12 compute-0 systemd-logind[745]: New session 52 of user zuul.
Dec 13 07:27:12 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 13 07:27:12 compute-0 sshd-session[212092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:27:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:13 compute-0 python3.9[212245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:27:13 compute-0 ceph-mon[74928]: pgmap v513: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:13 compute-0 podman[212326]: 2025-12-13 07:27:13.703979343 +0000 UTC m=+0.044577010 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 07:27:14 compute-0 python3.9[212415]: ansible-ansible.builtin.service_facts Invoked
Dec 13 07:27:14 compute-0 network[212432]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:27:14 compute-0 network[212433]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:27:14 compute-0 network[212434]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:27:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:15 compute-0 ceph-mon[74928]: pgmap v514: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:16 compute-0 sudo[212704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibimkdzohmzwzplivhgbbbrnlwohhcjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610836.384512-47-239420247771752/AnsiballZ_setup.py'
Dec 13 07:27:16 compute-0 sudo[212704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:16 compute-0 python3.9[212706]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 07:27:17 compute-0 sudo[212704]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:17 compute-0 sudo[212788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxkcjcvuctmeocwqssojpevhaplawca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610836.384512-47-239420247771752/AnsiballZ_dnf.py'
Dec 13 07:27:17 compute-0 sudo[212788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:17 compute-0 python3.9[212790]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:27:17 compute-0 ceph-mon[74928]: pgmap v515: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:19 compute-0 ceph-mon[74928]: pgmap v516: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:21 compute-0 ceph-mon[74928]: pgmap v517: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:21 compute-0 sudo[212788]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:22 compute-0 sudo[212941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlireucwpvgpcogkblpojqpchhhokzhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610841.970461-59-169618392637729/AnsiballZ_stat.py'
Dec 13 07:27:22 compute-0 sudo[212941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:22 compute-0 python3.9[212943]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:27:22 compute-0 sudo[212941]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:22 compute-0 sudo[213093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwpygbqtpyfpgocwnuphbtfucwaeyvfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610842.598166-69-168952853376433/AnsiballZ_command.py'
Dec 13 07:27:22 compute-0 sudo[213093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:23 compute-0 python3.9[213095]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:27:23 compute-0 sudo[213093]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:23 compute-0 sudo[213255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eakjieblzhrreeyqgphqyfisauoolgyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610843.261367-79-246027690284441/AnsiballZ_stat.py'
Dec 13 07:27:23 compute-0 sudo[213255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:23 compute-0 podman[213220]: 2025-12-13 07:27:23.482174597 +0000 UTC m=+0.061544045 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:27:23 compute-0 python3.9[213265]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:27:23 compute-0 sudo[213255]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:23 compute-0 ceph-mon[74928]: pgmap v518: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:23 compute-0 sudo[213422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkelnftqssjezccdgnjncmhffjdblkjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610843.7264805-87-27326917511139/AnsiballZ_command.py'
Dec 13 07:27:23 compute-0 sudo[213422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:24 compute-0 python3.9[213424]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:27:24 compute-0 sudo[213422]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:24 compute-0 sudo[213575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygmruvczgbfaffhavkscayddzasvaeml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610844.1860075-95-182838216523832/AnsiballZ_stat.py'
Dec 13 07:27:24 compute-0 sudo[213575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:24 compute-0 python3.9[213577]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:24 compute-0 sudo[213575]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:24 compute-0 sudo[213698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiviuevqhxiyesjbvhbvyredzsmcthdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610844.1860075-95-182838216523832/AnsiballZ_copy.py'
Dec 13 07:27:24 compute-0 sudo[213698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:25 compute-0 python3.9[213700]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610844.1860075-95-182838216523832/.source.iscsi _original_basename=.xzz_57b9 follow=False checksum=676550f67cdc4f2536cb95e8274b343680dfe66f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:25 compute-0 sudo[213698]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:25 compute-0 sudo[213850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjwaolflqwzryjacrkwifepwevdtowcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610845.2454844-110-144782443743918/AnsiballZ_file.py'
Dec 13 07:27:25 compute-0 sudo[213850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:25 compute-0 ceph-mon[74928]: pgmap v519: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:25 compute-0 python3.9[213852]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:25 compute-0 sudo[213850]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:26 compute-0 sudo[214002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-temrrafpprphcynxgrzitgwtcbvwlirn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610845.8407247-118-13226945340070/AnsiballZ_lineinfile.py'
Dec 13 07:27:26 compute-0 sudo[214002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:26 compute-0 python3.9[214004]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:26 compute-0 sudo[214002]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:26 compute-0 sudo[214154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aljytiuqxwrldueoskvtiehuzuqjvfxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610846.4500506-127-83551018930262/AnsiballZ_systemd_service.py'
Dec 13 07:27:26 compute-0 sudo[214154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:27 compute-0 python3.9[214156]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:27:27 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 13 07:27:27 compute-0 sudo[214154]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:27 compute-0 sudo[214310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdjjbhegkaibaucathwtpoymkramkxwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610847.2958763-135-96488505112686/AnsiballZ_systemd_service.py'
Dec 13 07:27:27 compute-0 sudo[214310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:27 compute-0 ceph-mon[74928]: pgmap v520: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:27 compute-0 python3.9[214312]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:27:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:27 compute-0 systemd[1]: Reloading.
Dec 13 07:27:27 compute-0 systemd-sysv-generator[214341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:27:27 compute-0 systemd-rc-local-generator[214338]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:27:28 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 13 07:27:28 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 13 07:27:28 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 07:27:28 compute-0 systemd[1]: Started Open-iSCSI.
Dec 13 07:27:28 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 13 07:27:28 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 13 07:27:28 compute-0 sudo[214310]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:28 compute-0 sudo[214510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkzcwnqnnoukdjmqbqhlwfhdwgrwiuhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610848.3269343-146-217172033165817/AnsiballZ_service_facts.py'
Dec 13 07:27:28 compute-0 sudo[214510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:28 compute-0 python3.9[214512]: ansible-ansible.builtin.service_facts Invoked
Dec 13 07:27:28 compute-0 network[214529]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:27:28 compute-0 network[214530]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:27:28 compute-0 network[214531]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:27:29 compute-0 ceph-mon[74928]: pgmap v521: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:30 compute-0 sudo[214510]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:31 compute-0 sudo[214801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pprfvseyxuuxbdpapvjdbagmogfajzgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610851.0085196-156-77502374434162/AnsiballZ_file.py'
Dec 13 07:27:31 compute-0 sudo[214801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:31 compute-0 python3.9[214803]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 07:27:31 compute-0 sudo[214801]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:31 compute-0 ceph-mon[74928]: pgmap v522: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:31 compute-0 sudo[214953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkqnzqlhbmqgqmfbdiiprsqwwkihutru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610851.49111-164-97325057088182/AnsiballZ_modprobe.py'
Dec 13 07:27:31 compute-0 sudo[214953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:31 compute-0 python3.9[214955]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 13 07:27:31 compute-0 sudo[214953]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:32 compute-0 sudo[215109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sddhpvvvpdnfcgyvrsbvakpctodkdgiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610852.1092741-172-32192198607542/AnsiballZ_stat.py'
Dec 13 07:27:32 compute-0 sudo[215109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:32 compute-0 python3.9[215111]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:32 compute-0 sudo[215109]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:32 compute-0 sudo[215232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbsygeudbtdgbzlkdssbkcsfdlahidie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610852.1092741-172-32192198607542/AnsiballZ_copy.py'
Dec 13 07:27:32 compute-0 sudo[215232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:32 compute-0 python3.9[215234]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610852.1092741-172-32192198607542/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:32 compute-0 sudo[215232]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:33 compute-0 sudo[215384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccfzqrgzzmpchuajoxlvkgnwycwezmvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610853.0053678-188-167007900694715/AnsiballZ_lineinfile.py'
Dec 13 07:27:33 compute-0 sudo[215384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:33 compute-0 python3.9[215386]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:33 compute-0 sudo[215384]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:33 compute-0 ceph-mon[74928]: pgmap v523: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:34 compute-0 sudo[215536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jefkcbtoznnhayfyicbsgoyjfshilodg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610853.5107744-196-261844735866626/AnsiballZ_systemd.py'
Dec 13 07:27:34 compute-0 sudo[215536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:34 compute-0 python3.9[215538]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:27:34 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 07:27:34 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 13 07:27:34 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 13 07:27:34 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 13 07:27:34 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 13 07:27:34 compute-0 sudo[215536]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:34 compute-0 sudo[215692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euwqivvtzfydyuablyqtgifeajlygkbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610854.441935-204-273493832882650/AnsiballZ_file.py'
Dec 13 07:27:34 compute-0 sudo[215692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:34 compute-0 python3.9[215694]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:27:34 compute-0 sudo[215692]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:35 compute-0 sudo[215844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzmlziiujsuexpqbglpnkbspahiylusr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610854.9503922-213-223060584673001/AnsiballZ_stat.py'
Dec 13 07:27:35 compute-0 sudo[215844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:35 compute-0 python3.9[215846]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:27:35 compute-0 sudo[215844]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:35 compute-0 sudo[215996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpkbyrymiujzypytdnxvatiacjjrnvzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610855.4427528-222-114544089737141/AnsiballZ_stat.py'
Dec 13 07:27:35 compute-0 sudo[215996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:35 compute-0 ceph-mon[74928]: pgmap v524: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:35 compute-0 python3.9[215998]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:27:35 compute-0 sudo[215996]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:36 compute-0 sudo[216148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mehfqlbjejnfkvbleidekvlqbnlyblpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610855.906437-230-33455022249584/AnsiballZ_stat.py'
Dec 13 07:27:36 compute-0 sudo[216148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:36 compute-0 python3.9[216150]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:36 compute-0 sudo[216148]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:36 compute-0 sudo[216271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cacsekbuicmeybfjcykpohhenzxteqld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610855.906437-230-33455022249584/AnsiballZ_copy.py'
Dec 13 07:27:36 compute-0 sudo[216271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:36 compute-0 python3.9[216273]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610855.906437-230-33455022249584/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:36 compute-0 sudo[216271]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:36 compute-0 sudo[216423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oynnykjljitrdweqvdtrlkxkyfcfahru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610856.771393-245-239059493247931/AnsiballZ_command.py'
Dec 13 07:27:36 compute-0 sudo[216423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:37 compute-0 python3.9[216425]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:27:37 compute-0 sudo[216423]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:37 compute-0 sudo[216576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyaiwjitqqfvtrzcxzntgysngiytyaoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610857.2675831-253-142714805260451/AnsiballZ_lineinfile.py'
Dec 13 07:27:37 compute-0 sudo[216576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:37 compute-0 python3.9[216578]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:37 compute-0 sudo[216576]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:37 compute-0 ceph-mon[74928]: pgmap v525: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:38 compute-0 sudo[216728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvzswrjnuodvyyjfkbcjxkwjaalnukuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610857.7476068-261-240789326130834/AnsiballZ_replace.py'
Dec 13 07:27:38 compute-0 sudo[216728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:27:38
Dec 13 07:27:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:27:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:27:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', '.mgr']
Dec 13 07:27:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:27:38 compute-0 python3.9[216730]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:38 compute-0 sudo[216728]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:38 compute-0 sudo[216880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkkfnfduicvnwrdcjkxftwqexssymrru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610858.4662724-269-63464761233639/AnsiballZ_replace.py'
Dec 13 07:27:38 compute-0 sudo[216880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:38 compute-0 python3.9[216882]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:38 compute-0 sudo[216880]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:27:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:27:39 compute-0 sudo[217032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkgaaushckwpoqmkmpshzrwlzwmqcwxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610858.9623973-278-276391534151539/AnsiballZ_lineinfile.py'
Dec 13 07:27:39 compute-0 sudo[217032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:39 compute-0 python3.9[217034]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:39 compute-0 sudo[217032]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:39 compute-0 sudo[217184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nafmwcpfkfgyfiavszfyjkffirngvyag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610859.4009898-278-218470155431336/AnsiballZ_lineinfile.py'
Dec 13 07:27:39 compute-0 sudo[217184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:39 compute-0 ceph-mon[74928]: pgmap v526: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:39 compute-0 python3.9[217186]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:39 compute-0 sudo[217184]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:39 compute-0 sudo[217336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsnderdphksbqqilzjwtsrurnktsliez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610859.8292081-278-244118820953430/AnsiballZ_lineinfile.py'
Dec 13 07:27:39 compute-0 sudo[217336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:40 compute-0 python3.9[217338]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:40 compute-0 sudo[217336]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:40 compute-0 sudo[217488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enzitayfqichdudyiyaknoludyuzxiwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610860.2533085-278-98703687904522/AnsiballZ_lineinfile.py'
Dec 13 07:27:40 compute-0 sudo[217488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:40 compute-0 python3.9[217490]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:40 compute-0 sudo[217488]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.697782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860697807, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3582461, "memory_usage": 3630904, "flush_reason": "Manual Compaction"}
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860704177, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3506243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9690, "largest_seqno": 11727, "table_properties": {"data_size": 3496930, "index_size": 5935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17751, "raw_average_key_size": 19, "raw_value_size": 3478568, "raw_average_value_size": 3810, "num_data_blocks": 269, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610625, "oldest_key_time": 1765610625, "file_creation_time": 1765610860, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 6430 microseconds, and 4973 cpu microseconds.
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704213) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3506243 bytes OK
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704226) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704617) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704628) EVENT_LOG_v1 {"time_micros": 1765610860704625, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.704638) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3573968, prev total WAL file size 3573968, number of live WAL files 2.
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.705261) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3424KB)], [26(6153KB)]
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860705283, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9807221, "oldest_snapshot_seqno": -1}
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3691 keys, 8220298 bytes, temperature: kUnknown
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860721787, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8220298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8191712, "index_size": 18236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88610, "raw_average_key_size": 24, "raw_value_size": 8121187, "raw_average_value_size": 2200, "num_data_blocks": 792, "num_entries": 3691, "num_filter_entries": 3691, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765610860, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.721905) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8220298 bytes
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.722195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 593.1 rd, 497.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.0 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4205, records dropped: 514 output_compression: NoCompression
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.722208) EVENT_LOG_v1 {"time_micros": 1765610860722202, "job": 10, "event": "compaction_finished", "compaction_time_micros": 16536, "compaction_time_cpu_micros": 13344, "output_level": 6, "num_output_files": 1, "total_output_size": 8220298, "num_input_records": 4205, "num_output_records": 3691, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860722668, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765610860723405, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.705225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:27:40 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:27:40.723455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:27:40 compute-0 sudo[217640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gayjevtcxkyksregpdrkjzsicwlfssxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610860.7022789-307-39300667925806/AnsiballZ_stat.py'
Dec 13 07:27:40 compute-0 sudo[217640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:41 compute-0 python3.9[217642]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:27:41 compute-0 sudo[217640]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:41 compute-0 sudo[217794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drahjqurqjgfbtzcgegsjjmpfehchlkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610861.1794899-315-248272071977342/AnsiballZ_file.py'
Dec 13 07:27:41 compute-0 sudo[217794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:41 compute-0 python3.9[217796]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:41 compute-0 sudo[217794]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:27:41.637 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:27:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:27:41.637 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:27:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:27:41.638 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:27:41 compute-0 ceph-mon[74928]: pgmap v527: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:41 compute-0 sudo[217946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsqlxbqhjsqkljbwftdhepcoajipxlmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610861.7431288-324-277308112241917/AnsiballZ_file.py'
Dec 13 07:27:41 compute-0 sudo[217946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:42 compute-0 python3.9[217948]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:27:42 compute-0 sudo[217946]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:42 compute-0 sudo[218098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qejdqrehymgzkgkwnnelimyszgvnamec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610862.2262473-332-195090899100550/AnsiballZ_stat.py'
Dec 13 07:27:42 compute-0 sudo[218098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:42 compute-0 python3.9[218100]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:42 compute-0 sudo[218098]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:42 compute-0 sudo[218176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hztdahxzeestfgijznvimskviifjxlcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610862.2262473-332-195090899100550/AnsiballZ_file.py'
Dec 13 07:27:42 compute-0 sudo[218176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:42 compute-0 python3.9[218178]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:27:42 compute-0 sudo[218176]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:43 compute-0 sudo[218328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewfhxrslnegihmhxsjdbkhjdggjewhna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610863.057546-332-279288924219937/AnsiballZ_stat.py'
Dec 13 07:27:43 compute-0 sudo[218328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:43 compute-0 python3.9[218330]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:43 compute-0 sudo[218328]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:43 compute-0 sudo[218406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dltvhpfoiximyboxpyklakegogaudeqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610863.057546-332-279288924219937/AnsiballZ_file.py'
Dec 13 07:27:43 compute-0 sudo[218406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:43 compute-0 ceph-mon[74928]: pgmap v528: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:43 compute-0 python3.9[218408]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:27:43 compute-0 sudo[218406]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:44 compute-0 sudo[218565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxlfuuavvybczgorsisztpgxadsynvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610863.847656-355-209392354584232/AnsiballZ_file.py'
Dec 13 07:27:44 compute-0 sudo[218565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:44 compute-0 podman[218532]: 2025-12-13 07:27:44.046487332 +0000 UTC m=+0.040934021 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 13 07:27:44 compute-0 python3.9[218574]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:44 compute-0 sudo[218565]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:44 compute-0 sudo[218728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-totzsfzpxmkkommzcujrneerfjubvnut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610864.3366284-363-260762958767611/AnsiballZ_stat.py'
Dec 13 07:27:44 compute-0 sudo[218728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:44 compute-0 python3.9[218730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:44 compute-0 ceph-mon[74928]: pgmap v529: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:44 compute-0 sudo[218728]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:44 compute-0 sudo[218806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvwqkshpqjdvturhtmjiuffmddivhsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610864.3366284-363-260762958767611/AnsiballZ_file.py'
Dec 13 07:27:44 compute-0 sudo[218806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:45 compute-0 python3.9[218808]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:45 compute-0 sudo[218806]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:45 compute-0 sudo[218958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlmjxywhybmaelfuigxmrjkzlusiztmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610865.1706355-375-252060196683110/AnsiballZ_stat.py'
Dec 13 07:27:45 compute-0 sudo[218958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:45 compute-0 python3.9[218960]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:45 compute-0 sudo[218958]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:45 compute-0 sudo[219036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zadezkonxaebjiqygddyscvpzwmvbmsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610865.1706355-375-252060196683110/AnsiballZ_file.py'
Dec 13 07:27:45 compute-0 sudo[219036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:45 compute-0 python3.9[219038]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:45 compute-0 sudo[219036]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:46 compute-0 sudo[219188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmpndfmrmqlfewdfhzoafxgtiysmvzpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610865.9747987-387-54531116400443/AnsiballZ_systemd.py'
Dec 13 07:27:46 compute-0 sudo[219188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:46 compute-0 python3.9[219190]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:27:46 compute-0 systemd[1]: Reloading.
Dec 13 07:27:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:46 compute-0 systemd-sysv-generator[219215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:27:46 compute-0 systemd-rc-local-generator[219211]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:27:46 compute-0 sudo[219188]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:47 compute-0 sudo[219378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmocygpzxgmuugajtrkjlziplcaipcwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610866.8310628-395-281247797038893/AnsiballZ_stat.py'
Dec 13 07:27:47 compute-0 sudo[219378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:47 compute-0 python3.9[219380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:47 compute-0 sudo[219378]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:47 compute-0 sudo[219456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkggeoekkuxqqcdycigrelxqtllnrtvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610866.8310628-395-281247797038893/AnsiballZ_file.py'
Dec 13 07:27:47 compute-0 sudo[219456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:47 compute-0 ceph-mon[74928]: pgmap v530: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:47 compute-0 python3.9[219458]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:47 compute-0 sudo[219456]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:47 compute-0 sudo[219608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vowosovovwsgnbnbqgqejqlxehqkbyxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610867.6625195-407-254171153008260/AnsiballZ_stat.py'
Dec 13 07:27:47 compute-0 sudo[219608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:47 compute-0 python3.9[219610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:48 compute-0 sudo[219608]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:48 compute-0 sudo[219686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eprbynaswvkzzscmruiiobtkniqgprhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610867.6625195-407-254171153008260/AnsiballZ_file.py'
Dec 13 07:27:48 compute-0 sudo[219686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:48 compute-0 python3.9[219688]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:48 compute-0 sudo[219686]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:27:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:27:48 compute-0 sudo[219838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnbwtfubbohoymxumlzczazltfhghwjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610868.447044-419-255982346886957/AnsiballZ_systemd.py'
Dec 13 07:27:48 compute-0 sudo[219838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:48 compute-0 python3.9[219840]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:27:48 compute-0 systemd[1]: Reloading.
Dec 13 07:27:48 compute-0 systemd-sysv-generator[219864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:27:48 compute-0 systemd-rc-local-generator[219860]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:27:49 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 07:27:49 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 07:27:49 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 07:27:49 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 07:27:49 compute-0 sudo[219838]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:49 compute-0 ceph-mon[74928]: pgmap v531: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:49 compute-0 sudo[220031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjamkkanrechrnuoklvhehxlccsabtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610869.472878-429-12783105027223/AnsiballZ_file.py'
Dec 13 07:27:49 compute-0 sudo[220031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:49 compute-0 python3.9[220033]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:27:49 compute-0 sudo[220031]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:50 compute-0 sudo[220183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oednmunpxmqdldbzybyirihstbmmlbqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610869.947117-437-229529941646148/AnsiballZ_stat.py'
Dec 13 07:27:50 compute-0 sudo[220183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:50 compute-0 python3.9[220185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:50 compute-0 sudo[220183]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:50 compute-0 sudo[220306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzhfxrygpnsjwzelobjwansyszeypdmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610869.947117-437-229529941646148/AnsiballZ_copy.py'
Dec 13 07:27:50 compute-0 sudo[220306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:50 compute-0 python3.9[220308]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610869.947117-437-229529941646148/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:27:50 compute-0 sudo[220306]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:51 compute-0 sudo[220458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqkbaqhswhbqpmhsydyqdfaquksofacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610870.943156-454-205282193995579/AnsiballZ_file.py'
Dec 13 07:27:51 compute-0 sudo[220458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:51 compute-0 python3.9[220460]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:27:51 compute-0 sudo[220458]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:51 compute-0 ceph-mon[74928]: pgmap v532: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:51 compute-0 sudo[220610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fguniaxpwcbfuvxmgpwrtkarmgslrzeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610871.4805546-462-14490369811053/AnsiballZ_stat.py'
Dec 13 07:27:51 compute-0 sudo[220610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:51 compute-0 python3.9[220612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:27:51 compute-0 sudo[220610]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:52 compute-0 sudo[220733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwkahwxfpdlpltblxojuqfridmmyuvcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610871.4805546-462-14490369811053/AnsiballZ_copy.py'
Dec 13 07:27:52 compute-0 sudo[220733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:52 compute-0 python3.9[220735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610871.4805546-462-14490369811053/.source.json _original_basename=.7iiheo2s follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:52 compute-0 sudo[220733]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:52 compute-0 sudo[220885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjbckthfzlostodgzjpiribjkafbpdat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610872.5020003-477-230718824817414/AnsiballZ_file.py'
Dec 13 07:27:52 compute-0 sudo[220885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:52 compute-0 python3.9[220887]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:27:52 compute-0 sudo[220885]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:53 compute-0 sudo[221037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkidvfmxiuvjzvbnnfixhnmwzcuqsgqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610873.013707-485-53397393826717/AnsiballZ_stat.py'
Dec 13 07:27:53 compute-0 sudo[221037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:53 compute-0 sudo[221037]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:53 compute-0 ceph-mon[74928]: pgmap v533: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:53 compute-0 sudo[221169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvdtcsmexzjutsixcfbtlaquljawbnfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610873.013707-485-53397393826717/AnsiballZ_copy.py'
Dec 13 07:27:53 compute-0 sudo[221169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:53 compute-0 podman[221134]: 2025-12-13 07:27:53.619237013 +0000 UTC m=+0.056431793 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 07:27:53 compute-0 sudo[221183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:27:53 compute-0 sudo[221183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:53 compute-0 sudo[221183]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:53 compute-0 sudo[221211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:27:53 compute-0 sudo[221211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:53 compute-0 sudo[221169]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:54 compute-0 sudo[221211]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:27:54 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:27:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:27:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:27:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:27:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:27:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:27:54 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:27:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:27:54 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:27:54 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:27:54 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:27:54 compute-0 sudo[221341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:27:54 compute-0 sudo[221341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:54 compute-0 sudo[221341]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:54 compute-0 sudo[221383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:27:54 compute-0 sudo[221383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:54 compute-0 sudo[221464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgqvxlkiuepceniadxtfzsndfyatmgun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610873.9793715-502-170216699924703/AnsiballZ_container_config_data.py'
Dec 13 07:27:54 compute-0 sudo[221464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:54 compute-0 podman[221477]: 2025-12-13 07:27:54.423154703 +0000 UTC m=+0.028470155 container create 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:27:54 compute-0 python3.9[221466]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 13 07:27:54 compute-0 sudo[221464]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:54 compute-0 systemd[1]: Started libpod-conmon-0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1.scope.
Dec 13 07:27:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:27:54 compute-0 podman[221477]: 2025-12-13 07:27:54.475024436 +0000 UTC m=+0.080339897 container init 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:27:54 compute-0 podman[221477]: 2025-12-13 07:27:54.480928589 +0000 UTC m=+0.086244039 container start 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec 13 07:27:54 compute-0 podman[221477]: 2025-12-13 07:27:54.482076616 +0000 UTC m=+0.087392068 container attach 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:27:54 compute-0 hungry_lovelace[221491]: 167 167
Dec 13 07:27:54 compute-0 systemd[1]: libpod-0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1.scope: Deactivated successfully.
Dec 13 07:27:54 compute-0 conmon[221491]: conmon 0b31ba9d55a7b2a5574b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1.scope/container/memory.events
Dec 13 07:27:54 compute-0 podman[221477]: 2025-12-13 07:27:54.485101334 +0000 UTC m=+0.090416786 container died 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 07:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5afdbc94c893fd414fec3d8a615358563ce9bddd9bc6af7e36cc87eb25f73bdd-merged.mount: Deactivated successfully.
Dec 13 07:27:54 compute-0 podman[221477]: 2025-12-13 07:27:54.504605619 +0000 UTC m=+0.109921070 container remove 0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 07:27:54 compute-0 podman[221477]: 2025-12-13 07:27:54.411502404 +0000 UTC m=+0.016817875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:27:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:27:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:27:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:27:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:27:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:27:54 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:27:54 compute-0 systemd[1]: libpod-conmon-0b31ba9d55a7b2a5574bcbf7fa0226d5708cdc3f29a9f89c247f64640640c4b1.scope: Deactivated successfully.
Dec 13 07:27:54 compute-0 podman[221545]: 2025-12-13 07:27:54.623605419 +0000 UTC m=+0.026827154 container create 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:27:54 compute-0 systemd[1]: Started libpod-conmon-756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da.scope.
Dec 13 07:27:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:54 compute-0 podman[221545]: 2025-12-13 07:27:54.675747836 +0000 UTC m=+0.078969560 container init 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:27:54 compute-0 podman[221545]: 2025-12-13 07:27:54.681019878 +0000 UTC m=+0.084241603 container start 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:27:54 compute-0 podman[221545]: 2025-12-13 07:27:54.682009589 +0000 UTC m=+0.085231314 container attach 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 07:27:54 compute-0 podman[221545]: 2025-12-13 07:27:54.612727435 +0000 UTC m=+0.015949180 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:27:54 compute-0 sudo[221685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udwrnuczvknuvyypisjazydaibzoguid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610874.6047244-511-90478318724096/AnsiballZ_container_config_hash.py'
Dec 13 07:27:54 compute-0 sudo[221685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:55 compute-0 crazy_nash[221602]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:27:55 compute-0 crazy_nash[221602]: --> All data devices are unavailable
Dec 13 07:27:55 compute-0 systemd[1]: libpod-756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da.scope: Deactivated successfully.
Dec 13 07:27:55 compute-0 conmon[221602]: conmon 756b8c15debe06033fb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da.scope/container/memory.events
Dec 13 07:27:55 compute-0 podman[221545]: 2025-12-13 07:27:55.055687412 +0000 UTC m=+0.458909136 container died 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 07:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-69d016ae3c5fa208f8d5fe1954c23a3cdd8985a44b6c880ad113b1d96047d4e0-merged.mount: Deactivated successfully.
Dec 13 07:27:55 compute-0 python3.9[221689]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 07:27:55 compute-0 podman[221545]: 2025-12-13 07:27:55.093054484 +0000 UTC m=+0.496276210 container remove 756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 07:27:55 compute-0 sudo[221685]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:55 compute-0 systemd[1]: libpod-conmon-756b8c15debe06033fb3b9efa387931b2e4c227e4184ee5c861e387fbf9a84da.scope: Deactivated successfully.
Dec 13 07:27:55 compute-0 sudo[221383]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:55 compute-0 sudo[221727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:27:55 compute-0 sudo[221727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:55 compute-0 sudo[221727]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:55 compute-0 sudo[221756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:27:55 compute-0 sudo[221756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:55 compute-0 podman[221843]: 2025-12-13 07:27:55.424319617 +0000 UTC m=+0.026617712 container create ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 07:27:55 compute-0 systemd[1]: Started libpod-conmon-ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f.scope.
Dec 13 07:27:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:27:55 compute-0 podman[221843]: 2025-12-13 07:27:55.471185831 +0000 UTC m=+0.073483946 container init ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 07:27:55 compute-0 podman[221843]: 2025-12-13 07:27:55.478245005 +0000 UTC m=+0.080543099 container start ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 07:27:55 compute-0 podman[221843]: 2025-12-13 07:27:55.479468886 +0000 UTC m=+0.081766981 container attach ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 07:27:55 compute-0 quirky_cartwright[221863]: 167 167
Dec 13 07:27:55 compute-0 systemd[1]: libpod-ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f.scope: Deactivated successfully.
Dec 13 07:27:55 compute-0 podman[221843]: 2025-12-13 07:27:55.48172606 +0000 UTC m=+0.084024155 container died ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 07:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2939b91c450a5c5ca2684924eb5dd19bb91bf1e0672682bfac3191e41b96843a-merged.mount: Deactivated successfully.
Dec 13 07:27:55 compute-0 podman[221843]: 2025-12-13 07:27:55.498278725 +0000 UTC m=+0.100576820 container remove ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:27:55 compute-0 podman[221843]: 2025-12-13 07:27:55.413907749 +0000 UTC m=+0.016205864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:27:55 compute-0 ceph-mon[74928]: pgmap v534: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:55 compute-0 systemd[1]: libpod-conmon-ba519715a4f83f14268c54eec9fb1ca817bda1b0db7b6f135ca716dc02e3cd9f.scope: Deactivated successfully.
Dec 13 07:27:55 compute-0 sudo[221947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkgshizzapjsojzleelentzaaxgdlpzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610875.2595227-520-278937695683871/AnsiballZ_podman_container_info.py'
Dec 13 07:27:55 compute-0 sudo[221947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:55 compute-0 podman[221955]: 2025-12-13 07:27:55.651894796 +0000 UTC m=+0.054162104 container create 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 07:27:55 compute-0 systemd[1]: Started libpod-conmon-144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458.scope.
Dec 13 07:27:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:55 compute-0 podman[221955]: 2025-12-13 07:27:55.708236568 +0000 UTC m=+0.110503876 container init 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 07:27:55 compute-0 podman[221955]: 2025-12-13 07:27:55.714093951 +0000 UTC m=+0.116361260 container start 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 07:27:55 compute-0 podman[221955]: 2025-12-13 07:27:55.71551325 +0000 UTC m=+0.117780559 container attach 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:27:55 compute-0 podman[221955]: 2025-12-13 07:27:55.641322356 +0000 UTC m=+0.043589685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:27:55 compute-0 python3.9[221952]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 07:27:55 compute-0 sudo[221947]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]: {
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:     "0": [
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:         {
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "devices": [
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "/dev/loop3"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             ],
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_name": "ceph_lv0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_size": "21470642176",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "name": "ceph_lv0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "tags": {
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cluster_name": "ceph",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.crush_device_class": "",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.encrypted": "0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.objectstore": "bluestore",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osd_id": "0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.type": "block",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.vdo": "0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.with_tpm": "0"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             },
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "type": "block",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "vg_name": "ceph_vg0"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:         }
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:     ],
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:     "1": [
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:         {
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "devices": [
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "/dev/loop4"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             ],
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_name": "ceph_lv1",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_size": "21470642176",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "name": "ceph_lv1",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "tags": {
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cluster_name": "ceph",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.crush_device_class": "",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.encrypted": "0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.objectstore": "bluestore",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osd_id": "1",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.type": "block",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.vdo": "0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.with_tpm": "0"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             },
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "type": "block",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "vg_name": "ceph_vg1"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:         }
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:     ],
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:     "2": [
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:         {
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "devices": [
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "/dev/loop5"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             ],
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_name": "ceph_lv2",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_size": "21470642176",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "name": "ceph_lv2",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "tags": {
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.cluster_name": "ceph",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.crush_device_class": "",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.encrypted": "0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.objectstore": "bluestore",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osd_id": "2",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.type": "block",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.vdo": "0",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:                 "ceph.with_tpm": "0"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             },
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "type": "block",
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:             "vg_name": "ceph_vg2"
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:         }
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]:     ]
Dec 13 07:27:55 compute-0 frosty_wilbur[221968]: }
Dec 13 07:27:55 compute-0 systemd[1]: libpod-144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458.scope: Deactivated successfully.
Dec 13 07:27:55 compute-0 podman[221955]: 2025-12-13 07:27:55.950921218 +0000 UTC m=+0.353188527 container died 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 07:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8edd6f4d35cda917a9d432e5e649d40e3178be7c933e35eacf6b7a61d9689def-merged.mount: Deactivated successfully.
Dec 13 07:27:55 compute-0 podman[221955]: 2025-12-13 07:27:55.973198858 +0000 UTC m=+0.375466166 container remove 144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:27:55 compute-0 systemd[1]: libpod-conmon-144ba1a18220709b20b9c946ad4690fe9f39bec0ec9b80f7df61f6084546b458.scope: Deactivated successfully.
Dec 13 07:27:56 compute-0 sudo[221756]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:56 compute-0 sudo[222030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:27:56 compute-0 sudo[222030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:56 compute-0 sudo[222030]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:56 compute-0 sudo[222055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:27:56 compute-0 sudo[222055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:56 compute-0 podman[222090]: 2025-12-13 07:27:56.305837101 +0000 UTC m=+0.028277021 container create 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:27:56 compute-0 systemd[1]: Started libpod-conmon-1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d.scope.
Dec 13 07:27:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:27:56 compute-0 podman[222090]: 2025-12-13 07:27:56.365926681 +0000 UTC m=+0.088366591 container init 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:27:56 compute-0 podman[222090]: 2025-12-13 07:27:56.370228139 +0000 UTC m=+0.092668049 container start 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:27:56 compute-0 frosty_newton[222103]: 167 167
Dec 13 07:27:56 compute-0 systemd[1]: libpod-1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d.scope: Deactivated successfully.
Dec 13 07:27:56 compute-0 podman[222090]: 2025-12-13 07:27:56.374045777 +0000 UTC m=+0.096485697 container attach 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:27:56 compute-0 podman[222090]: 2025-12-13 07:27:56.374676133 +0000 UTC m=+0.097116044 container died 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1528fbf70bf7d0d07aaefb013317ffcf72fb6c98db6244495f22ae9203940205-merged.mount: Deactivated successfully.
Dec 13 07:27:56 compute-0 podman[222090]: 2025-12-13 07:27:56.29438604 +0000 UTC m=+0.016825970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:27:56 compute-0 podman[222090]: 2025-12-13 07:27:56.392418113 +0000 UTC m=+0.114858023 container remove 1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:27:56 compute-0 systemd[1]: libpod-conmon-1003a1b793056f4e15c164e9a4655fd7f2bc27ec9b2913ee13137308be8b420d.scope: Deactivated successfully.
Dec 13 07:27:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:56 compute-0 podman[222133]: 2025-12-13 07:27:56.509934705 +0000 UTC m=+0.026076704 container create 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 07:27:56 compute-0 systemd[1]: Started libpod-conmon-2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a.scope.
Dec 13 07:27:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:27:56 compute-0 podman[222133]: 2025-12-13 07:27:56.572079649 +0000 UTC m=+0.088221667 container init 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:27:56 compute-0 podman[222133]: 2025-12-13 07:27:56.577919821 +0000 UTC m=+0.094061820 container start 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:27:56 compute-0 podman[222133]: 2025-12-13 07:27:56.579086765 +0000 UTC m=+0.095228784 container attach 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:27:56 compute-0 podman[222133]: 2025-12-13 07:27:56.500195863 +0000 UTC m=+0.016337882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:27:56 compute-0 sudo[222277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyystapollvfwyyfrncyymvrsqdxgqpg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610876.491041-533-134886467950706/AnsiballZ_edpm_container_manage.py'
Dec 13 07:27:56 compute-0 sudo[222277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:57 compute-0 python3[222279]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 07:27:57 compute-0 lvm[222362]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:27:57 compute-0 lvm[222361]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:27:57 compute-0 lvm[222361]: VG ceph_vg0 finished
Dec 13 07:27:57 compute-0 lvm[222362]: VG ceph_vg1 finished
Dec 13 07:27:57 compute-0 lvm[222365]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:27:57 compute-0 lvm[222365]: VG ceph_vg2 finished
Dec 13 07:27:57 compute-0 dreamy_curran[222190]: {}
Dec 13 07:27:57 compute-0 lvm[222367]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:27:57 compute-0 lvm[222367]: VG ceph_vg0 finished
Dec 13 07:27:57 compute-0 systemd[1]: libpod-2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a.scope: Deactivated successfully.
Dec 13 07:27:57 compute-0 podman[222133]: 2025-12-13 07:27:57.226419031 +0000 UTC m=+0.742561030 container died 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-23e2838ca76fda98571ae70e876789f5799b4b77f357f8580335e016ad39ebd1-merged.mount: Deactivated successfully.
Dec 13 07:27:57 compute-0 podman[222133]: 2025-12-13 07:27:57.252501426 +0000 UTC m=+0.768643425 container remove 2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:27:57 compute-0 systemd[1]: libpod-conmon-2fe7717ee0232ff16644bb3f333235f9bb6a97cb7ce961afb4ecf41594b8821a.scope: Deactivated successfully.
Dec 13 07:27:57 compute-0 sudo[222055]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:27:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:27:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:27:57 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:27:57 compute-0 sudo[222377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:27:57 compute-0 sudo[222377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:27:57 compute-0 sudo[222377]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:57 compute-0 ceph-mon[74928]: pgmap v535: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:27:57 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:27:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:27:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:58 compute-0 podman[222345]: 2025-12-13 07:27:58.901983166 +0000 UTC m=+1.804118171 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 07:27:58 compute-0 podman[222439]: 2025-12-13 07:27:58.997056327 +0000 UTC m=+0.027861841 container create f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 07:27:58 compute-0 podman[222439]: 2025-12-13 07:27:58.984015946 +0000 UTC m=+0.014821460 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 07:27:59 compute-0 python3[222279]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 07:27:59 compute-0 sudo[222277]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:59 compute-0 sudo[222617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gntzlgdhptsmdxcyyvpsfsbdhfxyhzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610879.2054713-541-98132244909168/AnsiballZ_stat.py'
Dec 13 07:27:59 compute-0 sudo[222617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:27:59 compute-0 ceph-mon[74928]: pgmap v536: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:27:59 compute-0 python3.9[222619]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:27:59 compute-0 sudo[222617]: pam_unix(sudo:session): session closed for user root
Dec 13 07:27:59 compute-0 sudo[222771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvmesxchhjkabtsgpzwxddupgqgaphyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610879.735432-550-152766532806866/AnsiballZ_file.py'
Dec 13 07:27:59 compute-0 sudo[222771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:00 compute-0 python3.9[222773]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:00 compute-0 sudo[222771]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:00 compute-0 sudo[222847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klnpzvwkgysooivyhrdpgevulwnrdzub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610879.735432-550-152766532806866/AnsiballZ_stat.py'
Dec 13 07:28:00 compute-0 sudo[222847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:00 compute-0 python3.9[222849]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:28:00 compute-0 sudo[222847]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:00 compute-0 sudo[222998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqwetwvevyqssmxhkkqawskhcdesepm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610880.4340858-550-196929256557919/AnsiballZ_copy.py'
Dec 13 07:28:00 compute-0 sudo[222998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:00 compute-0 python3.9[223000]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610880.4340858-550-196929256557919/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:01 compute-0 sudo[222998]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:01 compute-0 sudo[223074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siunjrwtwktbvixmnochzobbscfiacni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610880.4340858-550-196929256557919/AnsiballZ_systemd.py'
Dec 13 07:28:01 compute-0 sudo[223074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:01 compute-0 python3.9[223076]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 07:28:01 compute-0 systemd[1]: Reloading.
Dec 13 07:28:01 compute-0 systemd-sysv-generator[223100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:28:01 compute-0 systemd-rc-local-generator[223097]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:28:01 compute-0 ceph-mon[74928]: pgmap v537: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:01 compute-0 sudo[223074]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:01 compute-0 sudo[223185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaobtdbthqtiuhpbwdselshzngunhmvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610880.4340858-550-196929256557919/AnsiballZ_systemd.py'
Dec 13 07:28:01 compute-0 sudo[223185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:02 compute-0 python3.9[223187]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:02 compute-0 systemd[1]: Reloading.
Dec 13 07:28:02 compute-0 systemd-sysv-generator[223213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:28:02 compute-0 systemd-rc-local-generator[223210]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:28:02 compute-0 systemd[1]: Starting multipathd container...
Dec 13 07:28:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 07:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 07:28:02 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.
Dec 13 07:28:02 compute-0 podman[223227]: 2025-12-13 07:28:02.488238865 +0000 UTC m=+0.082340820 container init f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:28:02 compute-0 multipathd[223239]: + sudo -E kolla_set_configs
Dec 13 07:28:02 compute-0 podman[223227]: 2025-12-13 07:28:02.5067429 +0000 UTC m=+0.100844834 container start f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 07:28:02 compute-0 podman[223227]: multipathd
Dec 13 07:28:02 compute-0 systemd[1]: Started multipathd container.
Dec 13 07:28:02 compute-0 sudo[223245]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 13 07:28:02 compute-0 sudo[223245]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 07:28:02 compute-0 sudo[223245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 07:28:02 compute-0 sudo[223185]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:02 compute-0 multipathd[223239]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 07:28:02 compute-0 multipathd[223239]: INFO:__main__:Validating config file
Dec 13 07:28:02 compute-0 multipathd[223239]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 07:28:02 compute-0 multipathd[223239]: INFO:__main__:Writing out command to execute
Dec 13 07:28:02 compute-0 sudo[223245]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:02 compute-0 multipathd[223239]: ++ cat /run_command
Dec 13 07:28:02 compute-0 multipathd[223239]: + CMD='/usr/sbin/multipathd -d'
Dec 13 07:28:02 compute-0 multipathd[223239]: + ARGS=
Dec 13 07:28:02 compute-0 multipathd[223239]: + sudo kolla_copy_cacerts
Dec 13 07:28:02 compute-0 sudo[223279]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 13 07:28:02 compute-0 sudo[223279]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 07:28:02 compute-0 sudo[223279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 07:28:02 compute-0 sudo[223279]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:02 compute-0 multipathd[223239]: + [[ ! -n '' ]]
Dec 13 07:28:02 compute-0 multipathd[223239]: + . kolla_extend_start
Dec 13 07:28:02 compute-0 multipathd[223239]: Running command: '/usr/sbin/multipathd -d'
Dec 13 07:28:02 compute-0 multipathd[223239]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 13 07:28:02 compute-0 multipathd[223239]: + umask 0022
Dec 13 07:28:02 compute-0 multipathd[223239]: + exec /usr/sbin/multipathd -d
Dec 13 07:28:02 compute-0 multipathd[223239]: 2756.184926 | --------start up--------
Dec 13 07:28:02 compute-0 multipathd[223239]: 2756.184938 | read /etc/multipath.conf
Dec 13 07:28:02 compute-0 podman[223246]: 2025-12-13 07:28:02.590046739 +0000 UTC m=+0.075567903 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 07:28:02 compute-0 multipathd[223239]: 2756.188612 | path checkers start up
Dec 13 07:28:02 compute-0 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-4328e0a6a7ac1fb0.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 07:28:02 compute-0 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-4328e0a6a7ac1fb0.service: Failed with result 'exit-code'.
Dec 13 07:28:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:02 compute-0 python3.9[223425]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:28:03 compute-0 sudo[223577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzqnjgdzanyixgyondsllcgihlfeqxtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610883.127731-586-19881778657401/AnsiballZ_command.py'
Dec 13 07:28:03 compute-0 sudo[223577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:03 compute-0 python3.9[223579]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:03 compute-0 sudo[223577]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:03 compute-0 ceph-mon[74928]: pgmap v538: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:03 compute-0 sudo[223738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdjwyqhtbqvmiqicjjbpbrqkljhcsmhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610883.6445081-594-191285183355102/AnsiballZ_systemd.py'
Dec 13 07:28:03 compute-0 sudo[223738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:04 compute-0 python3.9[223740]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:28:04 compute-0 systemd[1]: Stopping multipathd container...
Dec 13 07:28:04 compute-0 multipathd[223239]: 2757.729018 | exit (signal)
Dec 13 07:28:04 compute-0 multipathd[223239]: 2757.729067 | --------shut down-------
Dec 13 07:28:04 compute-0 systemd[1]: libpod-f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.scope: Deactivated successfully.
Dec 13 07:28:04 compute-0 conmon[223239]: conmon f696b337a701eeb12548 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.scope/container/memory.events
Dec 13 07:28:04 compute-0 podman[223744]: 2025-12-13 07:28:04.165125559 +0000 UTC m=+0.056709544 container died f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 13 07:28:04 compute-0 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-4328e0a6a7ac1fb0.timer: Deactivated successfully.
Dec 13 07:28:04 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.
Dec 13 07:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254-merged.mount: Deactivated successfully.
Dec 13 07:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-userdata-shm.mount: Deactivated successfully.
Dec 13 07:28:04 compute-0 podman[223744]: 2025-12-13 07:28:04.24670215 +0000 UTC m=+0.138286135 container cleanup f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 07:28:04 compute-0 podman[223744]: multipathd
Dec 13 07:28:04 compute-0 podman[223766]: multipathd
Dec 13 07:28:04 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 13 07:28:04 compute-0 systemd[1]: Stopped multipathd container.
Dec 13 07:28:04 compute-0 systemd[1]: Starting multipathd container...
Dec 13 07:28:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 07:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c900ea433a65f9109f55cb6fe57286edb5b6f4dee643c19d1929e16cfeec254/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 07:28:04 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6.
Dec 13 07:28:04 compute-0 podman[223775]: 2025-12-13 07:28:04.385999266 +0000 UTC m=+0.075816672 container init f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 07:28:04 compute-0 multipathd[223786]: + sudo -E kolla_set_configs
Dec 13 07:28:04 compute-0 podman[223775]: 2025-12-13 07:28:04.403074253 +0000 UTC m=+0.092891649 container start f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 07:28:04 compute-0 podman[223775]: multipathd
Dec 13 07:28:04 compute-0 sudo[223793]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 13 07:28:04 compute-0 sudo[223793]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 07:28:04 compute-0 sudo[223793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 07:28:04 compute-0 systemd[1]: Started multipathd container.
Dec 13 07:28:04 compute-0 sudo[223738]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:04 compute-0 multipathd[223786]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 07:28:04 compute-0 multipathd[223786]: INFO:__main__:Validating config file
Dec 13 07:28:04 compute-0 multipathd[223786]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 07:28:04 compute-0 multipathd[223786]: INFO:__main__:Writing out command to execute
Dec 13 07:28:04 compute-0 sudo[223793]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:04 compute-0 multipathd[223786]: ++ cat /run_command
Dec 13 07:28:04 compute-0 multipathd[223786]: + CMD='/usr/sbin/multipathd -d'
Dec 13 07:28:04 compute-0 multipathd[223786]: + ARGS=
Dec 13 07:28:04 compute-0 multipathd[223786]: + sudo kolla_copy_cacerts
Dec 13 07:28:04 compute-0 podman[223794]: 2025-12-13 07:28:04.456128394 +0000 UTC m=+0.046975442 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 07:28:04 compute-0 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-407c9b11ead522c2.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 07:28:04 compute-0 systemd[1]: f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6-407c9b11ead522c2.service: Failed with result 'exit-code'.
Dec 13 07:28:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:04 compute-0 sudo[223818]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 13 07:28:04 compute-0 sudo[223818]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 07:28:04 compute-0 sudo[223818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 07:28:04 compute-0 sudo[223818]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:04 compute-0 multipathd[223786]: + [[ ! -n '' ]]
Dec 13 07:28:04 compute-0 multipathd[223786]: + . kolla_extend_start
Dec 13 07:28:04 compute-0 multipathd[223786]: Running command: '/usr/sbin/multipathd -d'
Dec 13 07:28:04 compute-0 multipathd[223786]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 13 07:28:04 compute-0 multipathd[223786]: + umask 0022
Dec 13 07:28:04 compute-0 multipathd[223786]: + exec /usr/sbin/multipathd -d
Dec 13 07:28:04 compute-0 multipathd[223786]: 2758.075287 | --------start up--------
Dec 13 07:28:04 compute-0 multipathd[223786]: 2758.075300 | read /etc/multipath.conf
Dec 13 07:28:04 compute-0 multipathd[223786]: 2758.079325 | path checkers start up
Dec 13 07:28:04 compute-0 sudo[223973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyudazeupcopidbnehwkoubysinlmjkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610884.5583835-602-165741152325788/AnsiballZ_file.py'
Dec 13 07:28:04 compute-0 sudo[223973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:04 compute-0 python3.9[223975]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:04 compute-0 sudo[223973]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:05 compute-0 sudo[224125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpchqntnhezelgjqezofbivkvxpxtiah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610885.1738312-614-28303955362626/AnsiballZ_file.py'
Dec 13 07:28:05 compute-0 sudo[224125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:05 compute-0 python3.9[224127]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 07:28:05 compute-0 sudo[224125]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:05 compute-0 ceph-mon[74928]: pgmap v539: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:05 compute-0 sudo[224277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrdbbkglvnhumntpnouoiczptmpoqdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610885.6354923-622-260144481189380/AnsiballZ_modprobe.py'
Dec 13 07:28:05 compute-0 sudo[224277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:05 compute-0 python3.9[224279]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 13 07:28:05 compute-0 kernel: Key type psk registered
Dec 13 07:28:06 compute-0 sudo[224277]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:06 compute-0 sudo[224440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkslxtdpjmlxamhksnetcjanjdnffgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610886.140287-630-186777551401996/AnsiballZ_stat.py'
Dec 13 07:28:06 compute-0 sudo[224440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:06 compute-0 python3.9[224442]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:06 compute-0 sudo[224440]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:06 compute-0 sudo[224563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hommalejifdsuepygqxbejrfmmbkbmuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610886.140287-630-186777551401996/AnsiballZ_copy.py'
Dec 13 07:28:06 compute-0 sudo[224563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:06 compute-0 python3.9[224565]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610886.140287-630-186777551401996/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:06 compute-0 sudo[224563]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:07 compute-0 sudo[224715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajudihwfmwsttgpilyfaeakzfcesfkqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610887.0371985-646-220107367444296/AnsiballZ_lineinfile.py'
Dec 13 07:28:07 compute-0 sudo[224715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:07 compute-0 python3.9[224717]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:07 compute-0 sudo[224715]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:07 compute-0 ceph-mon[74928]: pgmap v540: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:07 compute-0 sudo[224867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slhfqtywusznxuvpxykkulkcjuhzaynv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610887.548323-654-271044818968604/AnsiballZ_systemd.py'
Dec 13 07:28:07 compute-0 sudo[224867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:08 compute-0 python3.9[224869]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:28:08 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 07:28:08 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 13 07:28:08 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 13 07:28:08 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 13 07:28:08 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 13 07:28:08 compute-0 sudo[224867]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:08 compute-0 sudo[225023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pthmsqjsesfgnafvwtzbyypsfyyyxdbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610888.2603183-662-259268219470858/AnsiballZ_dnf.py'
Dec 13 07:28:08 compute-0 sudo[225023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:08 compute-0 python3.9[225025]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 07:28:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:28:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:28:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:28:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:28:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:28:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:28:09 compute-0 ceph-mon[74928]: pgmap v541: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:10 compute-0 systemd[1]: Reloading.
Dec 13 07:28:10 compute-0 systemd-rc-local-generator[225061]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:28:10 compute-0 systemd-sysv-generator[225064]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:28:10 compute-0 systemd[1]: Reloading.
Dec 13 07:28:10 compute-0 systemd-rc-local-generator[225086]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:28:10 compute-0 systemd-sysv-generator[225089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:28:11 compute-0 systemd-logind[745]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 07:28:11 compute-0 systemd-logind[745]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 07:28:11 compute-0 lvm[225137]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:28:11 compute-0 lvm[225135]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:28:11 compute-0 lvm[225135]: VG ceph_vg2 finished
Dec 13 07:28:11 compute-0 lvm[225134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:28:11 compute-0 lvm[225134]: VG ceph_vg0 finished
Dec 13 07:28:11 compute-0 lvm[225137]: VG ceph_vg1 finished
Dec 13 07:28:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 07:28:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 07:28:11 compute-0 systemd[1]: Reloading.
Dec 13 07:28:11 compute-0 systemd-rc-local-generator[225183]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:28:11 compute-0 systemd-sysv-generator[225188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:28:11 compute-0 ceph-mon[74928]: pgmap v542: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 07:28:11 compute-0 sudo[225023]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:12 compute-0 sudo[226485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdcydvdljytpexrttnukdrfyvrqmpamb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610892.1034799-670-187766899464050/AnsiballZ_systemd_service.py'
Dec 13 07:28:12 compute-0 sudo[226485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 07:28:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 07:28:12 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.044s CPU time.
Dec 13 07:28:12 compute-0 systemd[1]: run-r0ca2529a3429451fbec8a693ae3bacf2.service: Deactivated successfully.
Dec 13 07:28:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:12 compute-0 python3.9[226487]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:28:12 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 13 07:28:12 compute-0 iscsid[214351]: iscsid shutting down.
Dec 13 07:28:12 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 07:28:12 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 13 07:28:12 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 13 07:28:12 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 13 07:28:12 compute-0 systemd[1]: Started Open-iSCSI.
Dec 13 07:28:12 compute-0 sudo[226485]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:13 compute-0 python3.9[226642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 07:28:13 compute-0 ceph-mon[74928]: pgmap v543: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:13 compute-0 sudo[226796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbmqfvchhwgxzwmvodgsxtgujtttvamw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610893.4957483-688-249491286073653/AnsiballZ_file.py'
Dec 13 07:28:13 compute-0 sudo[226796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:13 compute-0 python3.9[226798]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:13 compute-0 sudo[226796]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:14 compute-0 sudo[226957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnbcaxinfmxlhzbmwujturwdmzwrtqme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610894.0880344-699-242538251441300/AnsiballZ_systemd_service.py'
Dec 13 07:28:14 compute-0 sudo[226957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:14 compute-0 podman[226922]: 2025-12-13 07:28:14.281927133 +0000 UTC m=+0.041756527 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 07:28:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:14 compute-0 python3.9[226964]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 07:28:14 compute-0 systemd[1]: Reloading.
Dec 13 07:28:14 compute-0 systemd-rc-local-generator[226986]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:28:14 compute-0 systemd-sysv-generator[226991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:28:14 compute-0 sudo[226957]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:15 compute-0 python3.9[227151]: ansible-ansible.builtin.service_facts Invoked
Dec 13 07:28:15 compute-0 network[227168]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 07:28:15 compute-0 network[227169]: 'network-scripts' will be removed from distribution in near future.
Dec 13 07:28:15 compute-0 network[227170]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 07:28:15 compute-0 ceph-mon[74928]: pgmap v544: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:17 compute-0 ceph-mon[74928]: pgmap v545: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:17 compute-0 sudo[227443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spmmdgkobggaljgrdklqkjostkmiuubf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610897.4949648-718-140279418005361/AnsiballZ_systemd_service.py'
Dec 13 07:28:17 compute-0 sudo[227443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:17 compute-0 python3.9[227445]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:17 compute-0 sudo[227443]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:18 compute-0 sudo[227596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orcsgxpwzietorwtcmzpxkggufswmblp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610898.0487335-718-35282599864577/AnsiballZ_systemd_service.py'
Dec 13 07:28:18 compute-0 sudo[227596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:18 compute-0 python3.9[227598]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:18 compute-0 sudo[227596]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:18 compute-0 sudo[227749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvpmnhrdeefcxiaqoawepnjxtjasvwtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610898.5757914-718-36639671233205/AnsiballZ_systemd_service.py'
Dec 13 07:28:18 compute-0 sudo[227749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:18 compute-0 python3.9[227751]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:19 compute-0 sudo[227749]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:19 compute-0 sudo[227902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnqpbirsbifzgyxkwelfwgyaxcnubmts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610899.0890923-718-50598914566664/AnsiballZ_systemd_service.py'
Dec 13 07:28:19 compute-0 sudo[227902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:19 compute-0 python3.9[227904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:19 compute-0 sudo[227902]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:19 compute-0 ceph-mon[74928]: pgmap v546: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:19 compute-0 sudo[228055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szocdiqbdhcajphorderpubgtrmzmjci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610899.608177-718-258376639993144/AnsiballZ_systemd_service.py'
Dec 13 07:28:19 compute-0 sudo[228055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:20 compute-0 python3.9[228057]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:20 compute-0 sudo[228055]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:20 compute-0 sudo[228208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-terxmhxsqdjgxonttdehpzqhwwsgozaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610900.118672-718-57177459333210/AnsiballZ_systemd_service.py'
Dec 13 07:28:20 compute-0 sudo[228208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:20 compute-0 python3.9[228210]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:20 compute-0 sudo[228208]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:20 compute-0 sudo[228361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmwfqzdsbmrebdkrsisxgpdwdchpudsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610900.6312838-718-64545890312274/AnsiballZ_systemd_service.py'
Dec 13 07:28:20 compute-0 sudo[228361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:21 compute-0 python3.9[228363]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:21 compute-0 sudo[228361]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:21 compute-0 sudo[228514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmmdybmmbxqhzbfsyykopocunubdhgrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610901.1423419-718-7056524534096/AnsiballZ_systemd_service.py'
Dec 13 07:28:21 compute-0 sudo[228514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:21 compute-0 python3.9[228516]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:28:21 compute-0 ceph-mon[74928]: pgmap v547: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:21 compute-0 sudo[228514]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:21 compute-0 sudo[228667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqrsjpesgnbadtxckzotwwxwraboxisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610901.7900403-777-114916146944083/AnsiballZ_file.py'
Dec 13 07:28:21 compute-0 sudo[228667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:22 compute-0 python3.9[228669]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:22 compute-0 sudo[228667]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:22 compute-0 sudo[228819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuvbomipgrqjuuudxfwprmplsdvenevm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610902.1951036-777-146373126752569/AnsiballZ_file.py'
Dec 13 07:28:22 compute-0 sudo[228819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:22 compute-0 python3.9[228821]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:22 compute-0 sudo[228819]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:22 compute-0 sudo[228971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujypbjgvqnywevzbrhgbpeecgqzpcjfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610902.5941372-777-84605664587911/AnsiballZ_file.py'
Dec 13 07:28:22 compute-0 sudo[228971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:22 compute-0 python3.9[228973]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:22 compute-0 sudo[228971]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:23 compute-0 sudo[229123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dndlgkzsotqbjdbzsgzigfbgmlsxdkrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610902.9927785-777-112262047598763/AnsiballZ_file.py'
Dec 13 07:28:23 compute-0 sudo[229123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:23 compute-0 python3.9[229125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:23 compute-0 sudo[229123]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:23 compute-0 sudo[229275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdhxdfceurdvwyroagohrixvzrnbxfvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610903.3913517-777-159236593362761/AnsiballZ_file.py'
Dec 13 07:28:23 compute-0 sudo[229275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:23 compute-0 ceph-mon[74928]: pgmap v548: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:23 compute-0 python3.9[229277]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:23 compute-0 podman[229278]: 2025-12-13 07:28:23.714523569 +0000 UTC m=+0.053831692 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 07:28:23 compute-0 sudo[229275]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:23 compute-0 sudo[229450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtauyisavpnlwnyitoefawbqzfqelkcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610903.8034124-777-176532816310483/AnsiballZ_file.py'
Dec 13 07:28:23 compute-0 sudo[229450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:24 compute-0 python3.9[229452]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:24 compute-0 sudo[229450]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:24 compute-0 sudo[229602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmzxakepgxyzrdbnsstcyyrtisepfwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610904.227557-777-232374439132632/AnsiballZ_file.py'
Dec 13 07:28:24 compute-0 sudo[229602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:24 compute-0 python3.9[229604]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:24 compute-0 sudo[229602]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:24 compute-0 sudo[229754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpcoterthogetrmzlxurpdidkezhnwnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610904.647043-777-127509160973597/AnsiballZ_file.py'
Dec 13 07:28:24 compute-0 sudo[229754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:24 compute-0 python3.9[229756]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:24 compute-0 sudo[229754]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:25 compute-0 sudo[229906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utdzofpydszplysbszdveeximojebegi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610905.1050336-834-139135593826311/AnsiballZ_file.py'
Dec 13 07:28:25 compute-0 sudo[229906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:25 compute-0 python3.9[229908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:25 compute-0 sudo[229906]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:25 compute-0 ceph-mon[74928]: pgmap v549: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:25 compute-0 sudo[230058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzkjqezarjttvsrtwykbxrlgpfivzfym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610905.5290143-834-4479747166649/AnsiballZ_file.py'
Dec 13 07:28:25 compute-0 sudo[230058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:25 compute-0 python3.9[230060]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:25 compute-0 sudo[230058]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:26 compute-0 sudo[230210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yalpltegchdfcqiwetmojbiywixcvpia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610905.946118-834-62509440349863/AnsiballZ_file.py'
Dec 13 07:28:26 compute-0 sudo[230210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:26 compute-0 python3.9[230212]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:26 compute-0 sudo[230210]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:26 compute-0 sudo[230362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzctwsnjzsaislfahnyfmbufdkrhtcvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610906.3488138-834-84266271131214/AnsiballZ_file.py'
Dec 13 07:28:26 compute-0 sudo[230362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:26 compute-0 python3.9[230364]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:26 compute-0 sudo[230362]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:26 compute-0 sudo[230514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxctkztyhgncissozatabzbwrllpudsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610906.7564354-834-236768750180237/AnsiballZ_file.py'
Dec 13 07:28:26 compute-0 sudo[230514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:27 compute-0 python3.9[230516]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:27 compute-0 sudo[230514]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:27 compute-0 sudo[230666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laxhcmogrjbehbkzokexjofslyfcnxew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610907.1698685-834-178039195697705/AnsiballZ_file.py'
Dec 13 07:28:27 compute-0 sudo[230666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:27 compute-0 python3.9[230668]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:27 compute-0 sudo[230666]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:27 compute-0 ceph-mon[74928]: pgmap v550: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:27 compute-0 sudo[230818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzelbwdjgdozbxgwmcqrcxbzrylqobfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610907.5836763-834-81869842310922/AnsiballZ_file.py'
Dec 13 07:28:27 compute-0 sudo[230818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:27 compute-0 python3.9[230820]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:27 compute-0 sudo[230818]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:28 compute-0 sudo[230970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peaefizwyznwzbiyxaddnihgyynrbveq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610907.987251-834-36655025531034/AnsiballZ_file.py'
Dec 13 07:28:28 compute-0 sudo[230970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:28 compute-0 python3.9[230972]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:28 compute-0 sudo[230970]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:28 compute-0 sudo[231122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lguzjpjemjhxudungtgzklvlniwipwpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610908.5950215-892-152185969883615/AnsiballZ_command.py'
Dec 13 07:28:28 compute-0 sudo[231122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:28 compute-0 python3.9[231124]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:28 compute-0 sudo[231122]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:29 compute-0 python3.9[231276]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 07:28:29 compute-0 ceph-mon[74928]: pgmap v551: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:29 compute-0 sudo[231426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmiagfiehrgxnjavkgduqyblensxhpha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610909.6711347-910-23197996759358/AnsiballZ_systemd_service.py'
Dec 13 07:28:29 compute-0 sudo[231426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:30 compute-0 python3.9[231428]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 07:28:30 compute-0 systemd[1]: Reloading.
Dec 13 07:28:30 compute-0 systemd-rc-local-generator[231449]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:28:30 compute-0 systemd-sysv-generator[231452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:28:30 compute-0 sudo[231426]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:30 compute-0 sudo[231614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgijdweublloecwniuzqbvekzjgvjday ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610910.481484-918-258572752641730/AnsiballZ_command.py'
Dec 13 07:28:30 compute-0 sudo[231614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:30 compute-0 python3.9[231616]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:30 compute-0 sudo[231614]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:31 compute-0 sudo[231767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qghelyfdhndseaholagxvibvvicgjxjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610910.9292364-918-28068701110038/AnsiballZ_command.py'
Dec 13 07:28:31 compute-0 sudo[231767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:31 compute-0 python3.9[231769]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:31 compute-0 sudo[231767]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:31 compute-0 sudo[231920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msqimnirxsamyhatwbmhmbsyscnhtbvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610911.3569586-918-47936992114766/AnsiballZ_command.py'
Dec 13 07:28:31 compute-0 sudo[231920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:31 compute-0 ceph-mon[74928]: pgmap v552: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:31 compute-0 python3.9[231922]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:31 compute-0 sudo[231920]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:31 compute-0 sudo[232073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljrtcwxufamelblqiiyrgpkgaehbsczz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610911.788823-918-36875949422861/AnsiballZ_command.py'
Dec 13 07:28:31 compute-0 sudo[232073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:32 compute-0 python3.9[232075]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:32 compute-0 sudo[232073]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:32 compute-0 sudo[232226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufwtmhjjrjnpchzrkilfsvxclwtlwyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610912.2214797-918-281017345759040/AnsiballZ_command.py'
Dec 13 07:28:32 compute-0 sudo[232226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:32 compute-0 python3.9[232228]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:32 compute-0 sudo[232226]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:32 compute-0 sudo[232379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scfrqvpfbqrrjlrpymdzxstpxbwdukyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610912.6570258-918-274053997284366/AnsiballZ_command.py'
Dec 13 07:28:32 compute-0 sudo[232379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:32 compute-0 python3.9[232381]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:33 compute-0 sudo[232379]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:33 compute-0 sudo[232532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxghpgkbyadsrzeevgmrwrmkvndmxgtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610913.0846207-918-16785171024534/AnsiballZ_command.py'
Dec 13 07:28:33 compute-0 sudo[232532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:33 compute-0 python3.9[232534]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:33 compute-0 sudo[232532]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:33 compute-0 ceph-mon[74928]: pgmap v553: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:33 compute-0 sudo[232685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqohpwsscffynqijonoyuvqenrbtrlur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610913.5236082-918-256698108360184/AnsiballZ_command.py'
Dec 13 07:28:33 compute-0 sudo[232685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:33 compute-0 python3.9[232687]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 07:28:33 compute-0 sudo[232685]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:34 compute-0 podman[232765]: 2025-12-13 07:28:34.702694564 +0000 UTC m=+0.043036091 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 07:28:34 compute-0 sudo[232855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxnoayxarfsjtfshmhxlvolcpmqvizdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610914.5894277-997-233447966176825/AnsiballZ_file.py'
Dec 13 07:28:34 compute-0 sudo[232855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:34 compute-0 python3.9[232857]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:34 compute-0 sudo[232855]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:35 compute-0 sudo[233007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwehilpslwfmesfgibfqfpjakrlgoss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610915.0233533-997-64448062414797/AnsiballZ_file.py'
Dec 13 07:28:35 compute-0 sudo[233007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:35 compute-0 python3.9[233009]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:35 compute-0 sudo[233007]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:35 compute-0 ceph-mon[74928]: pgmap v554: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:35 compute-0 sudo[233159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vklkikcjelughlnnfykmbudrdtpyfbzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610915.447356-997-91533677289450/AnsiballZ_file.py'
Dec 13 07:28:35 compute-0 sudo[233159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:35 compute-0 python3.9[233161]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:35 compute-0 sudo[233159]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:36 compute-0 sudo[233311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chkvgsjiconmmdokzzjjbdjkjuzblciz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610915.902855-1019-137518652622246/AnsiballZ_file.py'
Dec 13 07:28:36 compute-0 sudo[233311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:36 compute-0 python3.9[233313]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:36 compute-0 sudo[233311]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:36 compute-0 sudo[233463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kifqoyiprtypmvyfmrcavqfjyjwgucly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610916.3274934-1019-482501475670/AnsiballZ_file.py'
Dec 13 07:28:36 compute-0 sudo[233463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:36 compute-0 python3.9[233465]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:36 compute-0 sudo[233463]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:36 compute-0 sudo[233615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdljbtzloruewcaprltfxrfuaesagopp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610916.7602968-1019-75287103860723/AnsiballZ_file.py'
Dec 13 07:28:36 compute-0 sudo[233615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:37 compute-0 python3.9[233617]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:37 compute-0 sudo[233615]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:37 compute-0 sudo[233767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxbxgdcmdxnnxdkqnsyamfxlxztfixrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610917.1895607-1019-197108115793838/AnsiballZ_file.py'
Dec 13 07:28:37 compute-0 sudo[233767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:37 compute-0 python3.9[233769]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:37 compute-0 sudo[233767]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:37 compute-0 ceph-mon[74928]: pgmap v555: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:37 compute-0 sudo[233919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrkqyyzwlvdppqsiftmspnylwjchavwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610917.6146138-1019-253429905498126/AnsiballZ_file.py'
Dec 13 07:28:37 compute-0 sudo[233919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:37 compute-0 python3.9[233921]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:37 compute-0 sudo[233919]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:28:38
Dec 13 07:28:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:28:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:28:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'volumes', 'backups']
Dec 13 07:28:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:28:38 compute-0 sudo[234071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euflfxlvmruqoxwlhubqowggaddzimna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610918.0404959-1019-235763663774997/AnsiballZ_file.py'
Dec 13 07:28:38 compute-0 sudo[234071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:38 compute-0 python3.9[234073]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:38 compute-0 sudo[234071]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:38 compute-0 sudo[234223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcpvwxbmwhsmvgbeknqujrjbihfvwave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610918.5000207-1019-211552344762243/AnsiballZ_file.py'
Dec 13 07:28:38 compute-0 sudo[234223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:38 compute-0 python3.9[234225]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:38 compute-0 sudo[234223]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:28:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:28:39 compute-0 ceph-mon[74928]: pgmap v556: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:41 compute-0 ceph-mon[74928]: pgmap v557: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:28:41.637 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:28:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:28:41.637 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:28:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:28:41.638 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:28:41 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 13 07:28:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:42 compute-0 sudo[234377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sreyvehuqirobohbyocujikoqupzmweh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610922.5898352-1208-231815229425350/AnsiballZ_getent.py'
Dec 13 07:28:42 compute-0 sudo[234377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:43 compute-0 python3.9[234379]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 13 07:28:43 compute-0 sudo[234377]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:43 compute-0 sudo[234530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csoooxptsyjojvvyypxomkmjxiqljxfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610923.1819937-1216-190513283916652/AnsiballZ_group.py'
Dec 13 07:28:43 compute-0 sudo[234530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:43 compute-0 ceph-mon[74928]: pgmap v558: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:43 compute-0 python3.9[234532]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 07:28:43 compute-0 groupadd[234533]: group added to /etc/group: name=nova, GID=42436
Dec 13 07:28:43 compute-0 groupadd[234533]: group added to /etc/gshadow: name=nova
Dec 13 07:28:43 compute-0 groupadd[234533]: new group: name=nova, GID=42436
Dec 13 07:28:43 compute-0 sudo[234530]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:44 compute-0 sudo[234688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aamotniwhezpuvrbudjfonlxqtqtfvfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610923.7873678-1224-245402988558118/AnsiballZ_user.py'
Dec 13 07:28:44 compute-0 sudo[234688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:44 compute-0 python3.9[234690]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 07:28:44 compute-0 useradd[234693]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 13 07:28:44 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:28:44 compute-0 useradd[234693]: add 'nova' to group 'libvirt'
Dec 13 07:28:44 compute-0 useradd[234693]: add 'nova' to shadow group 'libvirt'
Dec 13 07:28:44 compute-0 sudo[234688]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:44 compute-0 podman[234692]: 2025-12-13 07:28:44.399226573 +0000 UTC m=+0.070919121 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 07:28:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:45 compute-0 sshd-session[234740]: Accepted publickey for zuul from 192.168.122.30 port 55228 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:28:45 compute-0 systemd-logind[745]: New session 53 of user zuul.
Dec 13 07:28:45 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec 13 07:28:45 compute-0 sshd-session[234740]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:28:45 compute-0 sshd-session[234743]: Received disconnect from 192.168.122.30 port 55228:11: disconnected by user
Dec 13 07:28:45 compute-0 sshd-session[234743]: Disconnected from user zuul 192.168.122.30 port 55228
Dec 13 07:28:45 compute-0 sshd-session[234740]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:28:45 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 13 07:28:45 compute-0 systemd-logind[745]: Session 53 logged out. Waiting for processes to exit.
Dec 13 07:28:45 compute-0 systemd-logind[745]: Removed session 53.
Dec 13 07:28:45 compute-0 python3.9[234893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:45 compute-0 ceph-mon[74928]: pgmap v559: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:45 compute-0 python3.9[235014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610925.2766905-1249-19431420739786/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:46 compute-0 python3.9[235164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:46 compute-0 python3.9[235240]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:47 compute-0 python3.9[235390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:47 compute-0 python3.9[235511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610926.7019405-1249-219414858594350/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:47 compute-0 ceph-mon[74928]: pgmap v560: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:47 compute-0 python3.9[235661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:48 compute-0 python3.9[235782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610927.4570239-1249-121539690938640/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:28:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:28:48 compute-0 python3.9[235932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:48 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 13 07:28:48 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 13 07:28:48 compute-0 python3.9[236055]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610928.2436867-1249-236448744192640/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:49 compute-0 python3.9[236205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:49 compute-0 ceph-mon[74928]: pgmap v561: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:49 compute-0 python3.9[236326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610929.045829-1249-189956654126808/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:50 compute-0 sudo[236476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqprikbtjwhojedyigwrguxtsyehgnhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610929.8729084-1332-98267171433018/AnsiballZ_file.py'
Dec 13 07:28:50 compute-0 sudo[236476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:50 compute-0 python3.9[236478]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:50 compute-0 sudo[236476]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:50 compute-0 sudo[236628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztpffyrheneooiwktwboddinfzqtdkjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610930.335854-1340-88672759022805/AnsiballZ_copy.py'
Dec 13 07:28:50 compute-0 sudo[236628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:50 compute-0 python3.9[236630]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:28:50 compute-0 sudo[236628]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:50 compute-0 sudo[236780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naibvhxmhjsxvxdxogwuvxnyzszgzdgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610930.8037949-1348-132942050820714/AnsiballZ_stat.py'
Dec 13 07:28:50 compute-0 sudo[236780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:51 compute-0 python3.9[236782]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:28:51 compute-0 sudo[236780]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:51 compute-0 sudo[236932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvpeqmqfsbkpikuietrdzkmmuaixgczy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610931.257735-1356-109280930427810/AnsiballZ_stat.py'
Dec 13 07:28:51 compute-0 sudo[236932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:51 compute-0 python3.9[236934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:51 compute-0 sudo[236932]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:51 compute-0 ceph-mon[74928]: pgmap v562: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:51 compute-0 sudo[237055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkfrrrzefqzjjwvlrxkgimlrmqjxwrhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610931.257735-1356-109280930427810/AnsiballZ_copy.py'
Dec 13 07:28:51 compute-0 sudo[237055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:51 compute-0 python3.9[237057]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765610931.257735-1356-109280930427810/.source _original_basename=.4ku6dlpf follow=False checksum=c10c61059eb90b078036f954336eace3871dfb1d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 13 07:28:51 compute-0 sudo[237055]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:52 compute-0 python3.9[237209]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:28:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:52 compute-0 python3.9[237361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:53 compute-0 python3.9[237482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610932.6285732-1382-160797082728960/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=209f20105d13c02e6cb251483bae1beb11a1258f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:53 compute-0 ceph-mon[74928]: pgmap v563: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:53 compute-0 python3.9[237632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 07:28:54 compute-0 podman[237727]: 2025-12-13 07:28:54.053589483 +0000 UTC m=+0.059463980 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 07:28:54 compute-0 python3.9[237763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610933.4733877-1397-239489721690463/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=0333d3a3f5c3a0526b0ebe430250032166710e8a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 07:28:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:54 compute-0 sudo[237926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amwwqqpjoxttxtmsxhlxqsppyiqtmjke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610934.3920133-1414-67466936422683/AnsiballZ_container_config_data.py'
Dec 13 07:28:54 compute-0 sudo[237926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:54 compute-0 python3.9[237928]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 13 07:28:54 compute-0 sudo[237926]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:55 compute-0 sudo[238078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujpfhshhrdegdoixvvmofzdvnymkvohp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610934.886967-1423-66638941810741/AnsiballZ_container_config_hash.py'
Dec 13 07:28:55 compute-0 sudo[238078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:55 compute-0 python3.9[238080]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 07:28:55 compute-0 sudo[238078]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:55 compute-0 sudo[238230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqdiivecrlcnrpvypvygoxurwsaydttm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610935.3969333-1433-154239298007994/AnsiballZ_edpm_container_manage.py'
Dec 13 07:28:55 compute-0 sudo[238230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:28:55 compute-0 ceph-mon[74928]: pgmap v564: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:55 compute-0 python3[238232]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 07:28:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:57 compute-0 sudo[238264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:28:57 compute-0 sudo[238264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:28:57 compute-0 sudo[238264]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:57 compute-0 sudo[238289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:28:57 compute-0 sudo[238289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:28:57 compute-0 ceph-mon[74928]: pgmap v565: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:28:58 compute-0 sudo[238289]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:28:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:28:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:28:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:28:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:28:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:28:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:28:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:28:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:28:58 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:28:58 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:28:58 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:28:58 compute-0 sudo[238342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:28:58 compute-0 sudo[238342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:28:58 compute-0 sudo[238342]: pam_unix(sudo:session): session closed for user root
Dec 13 07:28:58 compute-0 sudo[238367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:28:58 compute-0 sudo[238367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:28:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:28:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:28:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:28:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:28:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:28:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:28:58 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:28:58 compute-0 podman[238402]: 2025-12-13 07:28:58.671999411 +0000 UTC m=+0.075335244 container create 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 07:28:58 compute-0 systemd[1]: Started libpod-conmon-6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488.scope.
Dec 13 07:28:58 compute-0 podman[238402]: 2025-12-13 07:28:58.636698934 +0000 UTC m=+0.040034778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:28:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:28:58 compute-0 podman[238402]: 2025-12-13 07:28:58.734376949 +0000 UTC m=+0.137712782 container init 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:28:58 compute-0 podman[238402]: 2025-12-13 07:28:58.740085131 +0000 UTC m=+0.143420944 container start 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 07:28:58 compute-0 podman[238402]: 2025-12-13 07:28:58.741885056 +0000 UTC m=+0.145220879 container attach 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:28:58 compute-0 blissful_cerf[238415]: 167 167
Dec 13 07:28:58 compute-0 podman[238402]: 2025-12-13 07:28:58.743883805 +0000 UTC m=+0.147219638 container died 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 07:28:58 compute-0 systemd[1]: libpod-6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488.scope: Deactivated successfully.
Dec 13 07:28:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d57f2c544fa3c93dee744616260d6a57144eddd29abc8bbbe3312af734474cb-merged.mount: Deactivated successfully.
Dec 13 07:28:58 compute-0 podman[238402]: 2025-12-13 07:28:58.78178487 +0000 UTC m=+0.185120682 container remove 6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cerf, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:28:58 compute-0 systemd[1]: libpod-conmon-6eb5faac63842ff0796c2ec8eebdbcf8a6c64837ec705bb4fda9857b57c6c488.scope: Deactivated successfully.
Dec 13 07:28:59 compute-0 ceph-mon[74928]: pgmap v566: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:01 compute-0 anacron[30916]: Job `cron.daily' started
Dec 13 07:29:01 compute-0 anacron[30916]: Job `cron.daily' terminated
Dec 13 07:29:01 compute-0 ceph-mon[74928]: pgmap v567: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:03 compute-0 ceph-mon[74928]: pgmap v568: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:05 compute-0 ceph-mon[74928]: pgmap v569: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:06 compute-0 podman[238437]: 2025-12-13 07:29:06.056917437 +0000 UTC m=+7.131015176 container create f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 07:29:06 compute-0 podman[238475]: 2025-12-13 07:29:06.071683062 +0000 UTC m=+0.408781954 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 07:29:06 compute-0 podman[238243]: 2025-12-13 07:29:06.088696261 +0000 UTC m=+10.250228585 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 07:29:06 compute-0 systemd[1]: Started libpod-conmon-f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36.scope.
Dec 13 07:29:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:06 compute-0 podman[238437]: 2025-12-13 07:29:06.038054339 +0000 UTC m=+7.112152107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:06 compute-0 podman[238437]: 2025-12-13 07:29:06.131749313 +0000 UTC m=+7.205847071 container init f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:29:06 compute-0 podman[238437]: 2025-12-13 07:29:06.136988314 +0000 UTC m=+7.211086063 container start f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:29:06 compute-0 podman[238437]: 2025-12-13 07:29:06.138107659 +0000 UTC m=+7.212205407 container attach f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:29:06 compute-0 podman[238518]: 2025-12-13 07:29:06.192227391 +0000 UTC m=+0.027797559 container create 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS)
Dec 13 07:29:06 compute-0 podman[238518]: 2025-12-13 07:29:06.179872699 +0000 UTC m=+0.015442888 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 07:29:06 compute-0 python3[238232]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 13 07:29:06 compute-0 sudo[238230]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:06 compute-0 wizardly_kowalevski[238495]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:29:06 compute-0 wizardly_kowalevski[238495]: --> All data devices are unavailable
Dec 13 07:29:06 compute-0 systemd[1]: libpod-f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36.scope: Deactivated successfully.
Dec 13 07:29:06 compute-0 podman[238437]: 2025-12-13 07:29:06.539487633 +0000 UTC m=+7.613585381 container died f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-63a2d243eb7604d7a59c8f35917ec9ddfd33340facd340a34cbf51c877db93cb-merged.mount: Deactivated successfully.
Dec 13 07:29:06 compute-0 podman[238437]: 2025-12-13 07:29:06.570372866 +0000 UTC m=+7.644470614 container remove f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:29:06 compute-0 systemd[1]: libpod-conmon-f4c70ed9072a9dccaec7a9d50642c52f6a385a53bf9515dfafc756fd2cc42a36.scope: Deactivated successfully.
Dec 13 07:29:06 compute-0 sudo[238367]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:06 compute-0 sudo[238744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjkzoygaelfdizchhhxkrcpttjfnuzaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610946.4106026-1441-1466043010655/AnsiballZ_stat.py'
Dec 13 07:29:06 compute-0 sudo[238702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:29:06 compute-0 sudo[238744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:06 compute-0 sudo[238702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:29:06 compute-0 sudo[238702]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:06 compute-0 sudo[238749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:29:06 compute-0 sudo[238749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:29:06 compute-0 python3.9[238747]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:29:06 compute-0 sudo[238744]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:06 compute-0 podman[238808]: 2025-12-13 07:29:06.909426028 +0000 UTC m=+0.026116227 container create 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:29:06 compute-0 systemd[1]: Started libpod-conmon-6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2.scope.
Dec 13 07:29:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:29:06 compute-0 podman[238808]: 2025-12-13 07:29:06.963805758 +0000 UTC m=+0.080495967 container init 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:29:06 compute-0 podman[238808]: 2025-12-13 07:29:06.96894483 +0000 UTC m=+0.085635019 container start 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:29:06 compute-0 podman[238808]: 2025-12-13 07:29:06.970281464 +0000 UTC m=+0.086971673 container attach 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 07:29:06 compute-0 great_ganguly[238823]: 167 167
Dec 13 07:29:06 compute-0 systemd[1]: libpod-6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2.scope: Deactivated successfully.
Dec 13 07:29:06 compute-0 podman[238808]: 2025-12-13 07:29:06.972764984 +0000 UTC m=+0.089455173 container died 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-acd48edb1638c3f273fa236ff717d84fe465287c205c137dd1c16b08ebabe262-merged.mount: Deactivated successfully.
Dec 13 07:29:06 compute-0 podman[238808]: 2025-12-13 07:29:06.994124276 +0000 UTC m=+0.110814464 container remove 6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ganguly, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:29:06 compute-0 podman[238808]: 2025-12-13 07:29:06.898866402 +0000 UTC m=+0.015556611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:29:07 compute-0 systemd[1]: libpod-conmon-6744b0dab89b28e21eeb542c0fb66ecf64b09b7bda8745c47dd12cb81758eaa2.scope: Deactivated successfully.
Dec 13 07:29:07 compute-0 podman[238845]: 2025-12-13 07:29:07.114043589 +0000 UTC m=+0.028355387 container create 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:29:07 compute-0 systemd[1]: Started libpod-conmon-68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a.scope.
Dec 13 07:29:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:07 compute-0 podman[238845]: 2025-12-13 07:29:07.170222502 +0000 UTC m=+0.084534311 container init 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:29:07 compute-0 podman[238845]: 2025-12-13 07:29:07.17630499 +0000 UTC m=+0.090616777 container start 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:29:07 compute-0 podman[238845]: 2025-12-13 07:29:07.177708138 +0000 UTC m=+0.092019946 container attach 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:29:07 compute-0 podman[238845]: 2025-12-13 07:29:07.102071148 +0000 UTC m=+0.016382946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:29:07 compute-0 sudo[238990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fatnjvcogbioyoroesimitpaljbxadqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610947.1512444-1453-138633976342914/AnsiballZ_container_config_data.py'
Dec 13 07:29:07 compute-0 sudo[238990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:07 compute-0 eloquent_jones[238876]: {
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:     "0": [
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:         {
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "devices": [
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "/dev/loop3"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             ],
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_name": "ceph_lv0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_size": "21470642176",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "name": "ceph_lv0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "tags": {
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cluster_name": "ceph",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.crush_device_class": "",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.encrypted": "0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.objectstore": "bluestore",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osd_id": "0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.type": "block",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.vdo": "0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.with_tpm": "0"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             },
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "type": "block",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "vg_name": "ceph_vg0"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:         }
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:     ],
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:     "1": [
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:         {
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "devices": [
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "/dev/loop4"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             ],
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_name": "ceph_lv1",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_size": "21470642176",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "name": "ceph_lv1",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "tags": {
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cluster_name": "ceph",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.crush_device_class": "",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.encrypted": "0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.objectstore": "bluestore",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osd_id": "1",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.type": "block",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.vdo": "0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.with_tpm": "0"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             },
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "type": "block",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "vg_name": "ceph_vg1"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:         }
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:     ],
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:     "2": [
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:         {
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "devices": [
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "/dev/loop5"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             ],
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_name": "ceph_lv2",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_size": "21470642176",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "name": "ceph_lv2",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "tags": {
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.cluster_name": "ceph",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.crush_device_class": "",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.encrypted": "0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.objectstore": "bluestore",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osd_id": "2",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.type": "block",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.vdo": "0",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:                 "ceph.with_tpm": "0"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             },
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "type": "block",
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:             "vg_name": "ceph_vg2"
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:         }
Dec 13 07:29:07 compute-0 eloquent_jones[238876]:     ]
Dec 13 07:29:07 compute-0 eloquent_jones[238876]: }
Dec 13 07:29:07 compute-0 systemd[1]: libpod-68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a.scope: Deactivated successfully.
Dec 13 07:29:07 compute-0 conmon[238876]: conmon 68073674a53a0d39a456 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a.scope/container/memory.events
Dec 13 07:29:07 compute-0 podman[238845]: 2025-12-13 07:29:07.425277973 +0000 UTC m=+0.339589781 container died 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5631f2d8b7a9e7b575ccdf9d4254ee9af09e6532f7fd5abe85cb50ef625f82ae-merged.mount: Deactivated successfully.
Dec 13 07:29:07 compute-0 podman[238845]: 2025-12-13 07:29:07.449580819 +0000 UTC m=+0.363892607 container remove 68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jones, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 07:29:07 compute-0 systemd[1]: libpod-conmon-68073674a53a0d39a456638d5351719bde05171dcd3f002fbf2c9864277a590a.scope: Deactivated successfully.
Dec 13 07:29:07 compute-0 sudo[238749]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:07 compute-0 sudo[239006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:29:07 compute-0 sudo[239006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:29:07 compute-0 sudo[239006]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:07 compute-0 python3.9[238992]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 13 07:29:07 compute-0 sudo[238990]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:07 compute-0 sudo[239031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:29:07 compute-0 sudo[239031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:29:07 compute-0 ceph-mon[74928]: pgmap v570: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:07 compute-0 podman[239146]: 2025-12-13 07:29:07.800468844 +0000 UTC m=+0.029971688 container create 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 07:29:07 compute-0 systemd[1]: Started libpod-conmon-3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343.scope.
Dec 13 07:29:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:29:07 compute-0 podman[239146]: 2025-12-13 07:29:07.853526437 +0000 UTC m=+0.083029291 container init 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:29:07 compute-0 podman[239146]: 2025-12-13 07:29:07.858487055 +0000 UTC m=+0.087989888 container start 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:29:07 compute-0 podman[239146]: 2025-12-13 07:29:07.85959057 +0000 UTC m=+0.089093403 container attach 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:29:07 compute-0 hopeful_poincare[239196]: 167 167
Dec 13 07:29:07 compute-0 systemd[1]: libpod-3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343.scope: Deactivated successfully.
Dec 13 07:29:07 compute-0 podman[239146]: 2025-12-13 07:29:07.863301267 +0000 UTC m=+0.092804121 container died 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4d95096dff4beac66d4555b8c5e77543bcc44fc3780ca85921da1e00571bbfa-merged.mount: Deactivated successfully.
Dec 13 07:29:07 compute-0 podman[239146]: 2025-12-13 07:29:07.884089445 +0000 UTC m=+0.113592279 container remove 3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_poincare, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:29:07 compute-0 podman[239146]: 2025-12-13 07:29:07.788578817 +0000 UTC m=+0.018081671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:29:07 compute-0 systemd[1]: libpod-conmon-3479ccf8cebb1c00379f1f09e8b0c8c39ea4c127efe8a7ba5d0342887efe2343.scope: Deactivated successfully.
Dec 13 07:29:07 compute-0 sudo[239241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhjpxrkuovzttliwphgkdhrynsedyhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610947.701985-1462-227957058980156/AnsiballZ_container_config_hash.py'
Dec 13 07:29:07 compute-0 sudo[239241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:08 compute-0 podman[239251]: 2025-12-13 07:29:08.008768168 +0000 UTC m=+0.028293561 container create 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:29:08 compute-0 systemd[1]: Started libpod-conmon-4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8.scope.
Dec 13 07:29:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:08 compute-0 podman[239251]: 2025-12-13 07:29:08.069739663 +0000 UTC m=+0.089265056 container init 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:29:08 compute-0 podman[239251]: 2025-12-13 07:29:08.075211722 +0000 UTC m=+0.094737115 container start 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:29:08 compute-0 podman[239251]: 2025-12-13 07:29:08.077841005 +0000 UTC m=+0.097366398 container attach 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 07:29:08 compute-0 python3.9[239245]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 07:29:08 compute-0 sudo[239241]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:08 compute-0 podman[239251]: 2025-12-13 07:29:07.997269497 +0000 UTC m=+0.016794900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:29:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:08 compute-0 sudo[239479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymecbwwkovcmunjscitiahbadnvtnfve ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765610948.3296523-1472-9669500870125/AnsiballZ_edpm_container_manage.py'
Dec 13 07:29:08 compute-0 sudo[239479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:08 compute-0 lvm[239494]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:29:08 compute-0 lvm[239493]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:29:08 compute-0 lvm[239494]: VG ceph_vg1 finished
Dec 13 07:29:08 compute-0 lvm[239493]: VG ceph_vg0 finished
Dec 13 07:29:08 compute-0 lvm[239497]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:29:08 compute-0 lvm[239497]: VG ceph_vg2 finished
Dec 13 07:29:08 compute-0 reverent_black[239264]: {}
Dec 13 07:29:08 compute-0 lvm[239499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:29:08 compute-0 lvm[239499]: VG ceph_vg2 finished
Dec 13 07:29:08 compute-0 podman[239251]: 2025-12-13 07:29:08.719389628 +0000 UTC m=+0.738915022 container died 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 07:29:08 compute-0 systemd[1]: libpod-4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8.scope: Deactivated successfully.
Dec 13 07:29:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b37cf8a6036ab79ead7f23d0a0410ebedb7e5c3a37a32e4feaa10ab44d92f54-merged.mount: Deactivated successfully.
Dec 13 07:29:08 compute-0 lvm[239502]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:29:08 compute-0 lvm[239502]: VG ceph_vg2 finished
Dec 13 07:29:08 compute-0 podman[239251]: 2025-12-13 07:29:08.742245224 +0000 UTC m=+0.761770617 container remove 4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 07:29:08 compute-0 systemd[1]: libpod-conmon-4ccc05e0f2afbc4570ca5ba20fa61f71dea1f5ef620a1f9d88f0bf43aac2a0d8.scope: Deactivated successfully.
Dec 13 07:29:08 compute-0 sudo[239031]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:29:08 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:29:08 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:29:08 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:29:08 compute-0 sudo[239511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:29:08 compute-0 sudo[239511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:29:08 compute-0 sudo[239511]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:08 compute-0 python3[239483]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 07:29:08 compute-0 podman[239564]: 2025-12-13 07:29:08.956343733 +0000 UTC m=+0.030252505 container create 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:29:08 compute-0 podman[239564]: 2025-12-13 07:29:08.942722942 +0000 UTC m=+0.016631723 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 07:29:08 compute-0 python3[239483]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b kolla_start
Dec 13 07:29:09 compute-0 sudo[239479]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:29:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:29:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:29:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:29:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:29:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:29:09 compute-0 sudo[239740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjywupddrkjbkgsckkbngndhwqluaut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610949.1890378-1480-243439913957876/AnsiballZ_stat.py'
Dec 13 07:29:09 compute-0 sudo[239740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:09 compute-0 python3.9[239742]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:29:09 compute-0 sudo[239740]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:09 compute-0 ceph-mon[74928]: pgmap v571: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:29:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:29:09 compute-0 sudo[239894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fundrkvcagehnrjgistipexwombsryuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610949.722936-1489-166446622881529/AnsiballZ_file.py'
Dec 13 07:29:09 compute-0 sudo[239894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:10 compute-0 python3.9[239896]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:29:10 compute-0 sudo[239894]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:10 compute-0 sudo[240045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kifxrnauklgequhopihyssmcqaatsqeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610950.1174004-1489-120691869066872/AnsiballZ_copy.py'
Dec 13 07:29:10 compute-0 sudo[240045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:10 compute-0 python3.9[240047]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610950.1174004-1489-120691869066872/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 07:29:10 compute-0 sudo[240045]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:10 compute-0 sudo[240121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgyhfbeoceseyaqcgybxsdtytfmksfvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610950.1174004-1489-120691869066872/AnsiballZ_systemd.py'
Dec 13 07:29:10 compute-0 sudo[240121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:10 compute-0 virtnodedevd[204194]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 07:29:10 compute-0 virtnodedevd[204194]: hostname: compute-0
Dec 13 07:29:10 compute-0 virtnodedevd[204194]: Make forcefull daemon shutdown
Dec 13 07:29:10 compute-0 systemd[1]: virtnodedevd.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 07:29:10 compute-0 python3.9[240123]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 07:29:10 compute-0 systemd[1]: virtnodedevd.service: Failed with result 'exit-code'.
Dec 13 07:29:10 compute-0 systemd[1]: Reloading.
Dec 13 07:29:11 compute-0 systemd-sysv-generator[240147]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:29:11 compute-0 systemd-rc-local-generator[240143]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:29:11 compute-0 systemd[1]: virtnodedevd.service: Scheduled restart job, restart counter is at 1.
Dec 13 07:29:11 compute-0 systemd[1]: Stopped libvirt nodedev daemon.
Dec 13 07:29:11 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 13 07:29:11 compute-0 sudo[240121]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:11 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 13 07:29:11 compute-0 sudo[240255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utjcgcjdwlfazbvsiopcranvjoruwegy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610950.1174004-1489-120691869066872/AnsiballZ_systemd.py'
Dec 13 07:29:11 compute-0 sudo[240255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:11 compute-0 python3.9[240257]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 07:29:11 compute-0 ceph-mon[74928]: pgmap v572: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:11 compute-0 systemd[1]: Reloading.
Dec 13 07:29:11 compute-0 systemd-sysv-generator[240283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 07:29:11 compute-0 systemd-rc-local-generator[240280]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 07:29:11 compute-0 systemd[1]: Starting nova_compute container...
Dec 13 07:29:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:12 compute-0 podman[240296]: 2025-12-13 07:29:12.027821817 +0000 UTC m=+0.067168519 container init 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 07:29:12 compute-0 podman[240296]: 2025-12-13 07:29:12.034803604 +0000 UTC m=+0.074150306 container start 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:29:12 compute-0 podman[240296]: nova_compute
Dec 13 07:29:12 compute-0 nova_compute[240308]: + sudo -E kolla_set_configs
Dec 13 07:29:12 compute-0 systemd[1]: Started nova_compute container.
Dec 13 07:29:12 compute-0 sudo[240255]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Validating config file
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying service configuration files
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Deleting /etc/ceph
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Creating directory /etc/ceph
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Writing out command to execute
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 07:29:12 compute-0 nova_compute[240308]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 07:29:12 compute-0 nova_compute[240308]: ++ cat /run_command
Dec 13 07:29:12 compute-0 nova_compute[240308]: + CMD=nova-compute
Dec 13 07:29:12 compute-0 nova_compute[240308]: + ARGS=
Dec 13 07:29:12 compute-0 nova_compute[240308]: + sudo kolla_copy_cacerts
Dec 13 07:29:12 compute-0 nova_compute[240308]: + [[ ! -n '' ]]
Dec 13 07:29:12 compute-0 nova_compute[240308]: + . kolla_extend_start
Dec 13 07:29:12 compute-0 nova_compute[240308]: Running command: 'nova-compute'
Dec 13 07:29:12 compute-0 nova_compute[240308]: + echo 'Running command: '\''nova-compute'\'''
Dec 13 07:29:12 compute-0 nova_compute[240308]: + umask 0022
Dec 13 07:29:12 compute-0 nova_compute[240308]: + exec nova-compute
Dec 13 07:29:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:12 compute-0 python3.9[240469]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:29:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:13 compute-0 python3.9[240620]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:29:13 compute-0 ceph-mon[74928]: pgmap v573: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:13 compute-0 python3.9[240770]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 07:29:13 compute-0 nova_compute[240308]: 2025-12-13 07:29:13.845 240312 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 07:29:13 compute-0 nova_compute[240308]: 2025-12-13 07:29:13.846 240312 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 07:29:13 compute-0 nova_compute[240308]: 2025-12-13 07:29:13.846 240312 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 07:29:13 compute-0 nova_compute[240308]: 2025-12-13 07:29:13.846 240312 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 13 07:29:13 compute-0 nova_compute[240308]: 2025-12-13 07:29:13.953 240312 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:29:13 compute-0 nova_compute[240308]: 2025-12-13 07:29:13.965 240312 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:29:13 compute-0 nova_compute[240308]: 2025-12-13 07:29:13.965 240312 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 13 07:29:14 compute-0 sudo[240924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcbjvzgjiilxonrrdtoeojhrnjnhyevq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610953.9648945-1549-235308334344564/AnsiballZ_podman_container.py'
Dec 13 07:29:14 compute-0 sudo[240924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.442 240312 INFO nova.virt.driver [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 13 07:29:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.545 240312 INFO nova.compute.provider_config [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 13 07:29:14 compute-0 python3.9[240926]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.556 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 07:29:14 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.556 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.556 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.557 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.558 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.559 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.560 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.561 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.562 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.563 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.564 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.565 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.566 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.567 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.568 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.569 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.570 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.571 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.572 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.573 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.574 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.575 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.576 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.577 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.578 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.579 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.580 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.581 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.582 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.583 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.584 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.585 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.586 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.587 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.588 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.589 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.590 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.591 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.592 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.593 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.594 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.595 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.596 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.597 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.598 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.599 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.600 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.601 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.602 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.603 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.604 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.605 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.606 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.607 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.608 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.609 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.610 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.611 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.612 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.613 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.614 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.615 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.616 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.617 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.618 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.619 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.620 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.621 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.622 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.623 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.624 240312 WARNING oslo_config.cfg [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 13 07:29:14 compute-0 nova_compute[240308]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 13 07:29:14 compute-0 nova_compute[240308]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 13 07:29:14 compute-0 nova_compute[240308]: and ``live_migration_inbound_addr`` respectively.
Dec 13 07:29:14 compute-0 nova_compute[240308]: ).  Its value may be silently ignored in the future.
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.625 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.626 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_secret_uuid        = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.627 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.628 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.629 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.630 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.631 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.632 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.633 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.634 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.635 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.636 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.637 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.638 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.639 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.640 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.641 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.642 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.643 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.644 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.645 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.645 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.645 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.646 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.647 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.648 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.649 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.650 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.651 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.652 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.653 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.654 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.655 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 sudo[240924]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.655 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.657 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.658 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.659 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.660 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.661 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.662 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.663 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.664 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.665 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.666 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.667 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.668 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.669 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.670 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.671 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.672 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.673 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.674 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.675 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.676 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.677 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.678 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.679 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.680 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.681 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.682 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.683 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.684 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.685 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.686 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.687 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.688 240312 DEBUG oslo_service.service [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.688 240312 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.697 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 13 07:29:14 compute-0 podman[240944]: 2025-12-13 07:29:14.699137807 +0000 UTC m=+0.045940964 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.698 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.698 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.698 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 13 07:29:14 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 13 07:29:14 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.748 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f29569fad60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.750 240312 DEBUG nova.virt.libvirt.host [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f29569fad60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.750 240312 INFO nova.virt.libvirt.driver [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Connection event '1' reason 'None'
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.762 240312 WARNING nova.virt.libvirt.driver [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 13 07:29:14 compute-0 nova_compute[240308]: 2025-12-13 07:29:14.762 240312 DEBUG nova.virt.libvirt.volume.mount [None req-4f209c23-1ed1-4eec-a54b-5746b57a65ab - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 13 07:29:14 compute-0 sudo[241163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhlrufcohdtobgqmprqtdvfkfuovihqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610954.7969959-1557-81203938659014/AnsiballZ_systemd.py'
Dec 13 07:29:14 compute-0 sudo[241163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:15 compute-0 python3.9[241165]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 07:29:15 compute-0 systemd[1]: Stopping nova_compute container...
Dec 13 07:29:15 compute-0 nova_compute[240308]: 2025-12-13 07:29:15.300 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 07:29:15 compute-0 nova_compute[240308]: 2025-12-13 07:29:15.300 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 07:29:15 compute-0 nova_compute[240308]: 2025-12-13 07:29:15.300 240312 DEBUG oslo_concurrency.lockutils [None req-d6ef4c52-d52a-430d-b2a4-7b44549bc916 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 07:29:15 compute-0 ceph-mon[74928]: pgmap v574: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:15 compute-0 virtqemud[241006]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 07:29:15 compute-0 virtqemud[241006]: hostname: compute-0
Dec 13 07:29:15 compute-0 virtqemud[241006]: End of file while reading data: Input/output error
Dec 13 07:29:15 compute-0 systemd[1]: libpod-85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306.scope: Deactivated successfully.
Dec 13 07:29:15 compute-0 podman[241177]: 2025-12-13 07:29:15.698001979 +0000 UTC m=+0.423056443 container died 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 07:29:15 compute-0 systemd[1]: libpod-85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306.scope: Consumed 2.424s CPU time.
Dec 13 07:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306-userdata-shm.mount: Deactivated successfully.
Dec 13 07:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713-merged.mount: Deactivated successfully.
Dec 13 07:29:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:17 compute-0 podman[241177]: 2025-12-13 07:29:17.521266579 +0000 UTC m=+2.246321043 container cleanup 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 07:29:17 compute-0 podman[241177]: nova_compute
Dec 13 07:29:17 compute-0 podman[241200]: nova_compute
Dec 13 07:29:17 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 13 07:29:17 compute-0 systemd[1]: Stopped nova_compute container.
Dec 13 07:29:17 compute-0 systemd[1]: Starting nova_compute container...
Dec 13 07:29:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:29:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b99b624df972157fdfc5e1964b2f648ce2236d9f885434f04fdbb4225095713/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:17 compute-0 podman[241209]: 2025-12-13 07:29:17.654301554 +0000 UTC m=+0.069386768 container init 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202)
Dec 13 07:29:17 compute-0 podman[241209]: 2025-12-13 07:29:17.658708691 +0000 UTC m=+0.073793905 container start 85647c0fe69de86f6774be6672640c6d932467db58f079f97876cd7dae0a1306 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 07:29:17 compute-0 podman[241209]: nova_compute
Dec 13 07:29:17 compute-0 nova_compute[241222]: + sudo -E kolla_set_configs
Dec 13 07:29:17 compute-0 systemd[1]: Started nova_compute container.
Dec 13 07:29:17 compute-0 sudo[241163]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:17 compute-0 ceph-mon[74928]: pgmap v575: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Validating config file
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying service configuration files
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /etc/ceph
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Creating directory /etc/ceph
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Writing out command to execute
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 07:29:17 compute-0 nova_compute[241222]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 07:29:17 compute-0 nova_compute[241222]: ++ cat /run_command
Dec 13 07:29:17 compute-0 nova_compute[241222]: + CMD=nova-compute
Dec 13 07:29:17 compute-0 nova_compute[241222]: + ARGS=
Dec 13 07:29:17 compute-0 nova_compute[241222]: + sudo kolla_copy_cacerts
Dec 13 07:29:17 compute-0 nova_compute[241222]: Running command: 'nova-compute'
Dec 13 07:29:17 compute-0 nova_compute[241222]: + [[ ! -n '' ]]
Dec 13 07:29:17 compute-0 nova_compute[241222]: + . kolla_extend_start
Dec 13 07:29:17 compute-0 nova_compute[241222]: + echo 'Running command: '\''nova-compute'\'''
Dec 13 07:29:17 compute-0 nova_compute[241222]: + umask 0022
Dec 13 07:29:17 compute-0 nova_compute[241222]: + exec nova-compute
Dec 13 07:29:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:18 compute-0 sudo[241383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijuxisddnkwmyqrvtlcpcrpkyhkzgpav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765610957.827696-1566-178254160862126/AnsiballZ_podman_container.py'
Dec 13 07:29:18 compute-0 sudo[241383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:29:18 compute-0 python3.9[241385]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 13 07:29:18 compute-0 systemd[1]: Started libpod-conmon-547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e.scope.
Dec 13 07:29:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:29:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb34115f18d67d1db81bec5df2bdc2ae866c8f99644700a5584e1256100a924/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb34115f18d67d1db81bec5df2bdc2ae866c8f99644700a5584e1256100a924/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb34115f18d67d1db81bec5df2bdc2ae866c8f99644700a5584e1256100a924/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 13 07:29:18 compute-0 podman[241404]: 2025-12-13 07:29:18.368301657 +0000 UTC m=+0.087616576 container init 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 13 07:29:18 compute-0 podman[241404]: 2025-12-13 07:29:18.373203604 +0000 UTC m=+0.092518524 container start 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 07:29:18 compute-0 python3.9[241385]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Applying nova statedir ownership
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 13 07:29:18 compute-0 nova_compute_init[241423]: INFO:nova_statedir:Nova statedir ownership complete
Dec 13 07:29:18 compute-0 systemd[1]: libpod-547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e.scope: Deactivated successfully.
Dec 13 07:29:18 compute-0 podman[241435]: 2025-12-13 07:29:18.454672721 +0000 UTC m=+0.025737776 container died 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 07:29:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e-userdata-shm.mount: Deactivated successfully.
Dec 13 07:29:18 compute-0 podman[241435]: 2025-12-13 07:29:18.468223509 +0000 UTC m=+0.039288545 container cleanup 547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Dec 13 07:29:18 compute-0 sudo[241383]: pam_unix(sudo:session): session closed for user root
Dec 13 07:29:18 compute-0 systemd[1]: libpod-conmon-547dab5fa81042d08d1db983dbc1e200a1ce8b11b8dce99b2f25723fa6b8123e.scope: Deactivated successfully.
Dec 13 07:29:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb34115f18d67d1db81bec5df2bdc2ae866c8f99644700a5584e1256100a924-merged.mount: Deactivated successfully.
Dec 13 07:29:18 compute-0 sshd-session[212095]: Connection closed by 192.168.122.30 port 52180
Dec 13 07:29:18 compute-0 sshd-session[212092]: pam_unix(sshd:session): session closed for user zuul
Dec 13 07:29:18 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 13 07:29:18 compute-0 systemd[1]: session-52.scope: Consumed 1min 39.414s CPU time.
Dec 13 07:29:18 compute-0 systemd-logind[745]: Session 52 logged out. Waiting for processes to exit.
Dec 13 07:29:18 compute-0 systemd-logind[745]: Removed session 52.
Dec 13 07:29:19 compute-0 nova_compute[241222]: 2025-12-13 07:29:19.435 241226 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 07:29:19 compute-0 nova_compute[241222]: 2025-12-13 07:29:19.436 241226 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 07:29:19 compute-0 nova_compute[241222]: 2025-12-13 07:29:19.436 241226 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 07:29:19 compute-0 nova_compute[241222]: 2025-12-13 07:29:19.436 241226 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 13 07:29:19 compute-0 nova_compute[241222]: 2025-12-13 07:29:19.549 241226 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:29:19 compute-0 nova_compute[241222]: 2025-12-13 07:29:19.559 241226 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:29:19 compute-0 nova_compute[241222]: 2025-12-13 07:29:19.559 241226 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 13 07:29:19 compute-0 ceph-mon[74928]: pgmap v576: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:19 compute-0 nova_compute[241222]: 2025-12-13 07:29:19.928 241226 INFO nova.virt.driver [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.016 241226 INFO nova.compute.provider_config [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.026 241226 DEBUG oslo_concurrency.lockutils [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.026 241226 DEBUG oslo_concurrency.lockutils [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.026 241226 DEBUG oslo_concurrency.lockutils [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.026 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.027 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.028 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.029 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.030 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.031 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.032 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.033 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.034 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.035 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.036 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.037 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.038 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.039 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.040 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.041 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.042 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.043 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.044 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.045 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.046 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.047 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.048 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.049 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.050 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.051 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.052 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.053 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.054 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.055 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.056 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.057 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.058 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.059 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.060 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.061 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.062 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.063 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.064 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.065 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.066 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.067 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.068 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.069 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.070 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.071 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.072 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.073 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.074 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.075 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.076 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.077 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.078 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.079 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.080 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.081 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.082 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.083 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.084 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.085 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.086 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.087 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.088 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.089 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.090 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.091 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.092 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.093 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.094 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.095 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 WARNING oslo_config.cfg [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 13 07:29:20 compute-0 nova_compute[241222]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 13 07:29:20 compute-0 nova_compute[241222]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 13 07:29:20 compute-0 nova_compute[241222]: and ``live_migration_inbound_addr`` respectively.
Dec 13 07:29:20 compute-0 nova_compute[241222]: ).  Its value may be silently ignored in the future.
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.096 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.097 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.098 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_secret_uuid        = 00fdae1b-7fad-5f1b-8734-ba4d9298a6de log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.099 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.100 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.101 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.102 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.103 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.104 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.105 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.106 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.107 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.108 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.109 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.110 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.111 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.112 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.113 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.114 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.115 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.116 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.117 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.118 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.119 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.120 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.121 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.122 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.123 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.124 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.125 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.126 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.127 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.128 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.129 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.130 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.131 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.132 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.133 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.134 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.135 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.136 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.137 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.138 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.139 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.140 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.141 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.142 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.143 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.144 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.145 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.146 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.147 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.148 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.149 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.150 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.151 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.152 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.153 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.154 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.155 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.156 241226 DEBUG oslo_service.service [None req-57c63b2c-2d53-49c9-9ef5-1033c279917c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.157 241226 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.180 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.181 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.181 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.181 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.190 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4341865be0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.192 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4341865be0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.192 241226 INFO nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Connection event '1' reason 'None'
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.196 241226 INFO nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host capabilities <capabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]: 
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <host>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <uuid>bdf0d7c0-5eef-46ac-89a1-b1ab7cc430f1</uuid>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <arch>x86_64</arch>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model>EPYC-Milan-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <vendor>AMD</vendor>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <microcode version='167776725'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <signature family='25' model='1' stepping='1'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <maxphysaddr mode='emulate' bits='48'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='x2apic'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='tsc-deadline'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='osxsave'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='hypervisor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='tsc_adjust'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='ospke'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='vaes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='vpclmulqdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='spec-ctrl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='stibp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='arch-capabilities'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='cmp_legacy'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='virt-ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='lbrv'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='tsc-scale'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='vmcb-clean'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='pause-filter'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='pfthreshold'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='v-vmsave-vmload'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='vgif'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='rdctl-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='skip-l1dfl-vmentry'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='mds-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature name='pschange-mc-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <pages unit='KiB' size='4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <pages unit='KiB' size='2048'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <pages unit='KiB' size='1048576'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <power_management>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <suspend_mem/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </power_management>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <iommu support='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <migration_features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <live/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <uri_transports>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <uri_transport>tcp</uri_transport>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <uri_transport>rdma</uri_transport>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </uri_transports>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </migration_features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <topology>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <cells num='1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <cell id='0'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:           <memory unit='KiB'>7865356</memory>
Dec 13 07:29:20 compute-0 nova_compute[241222]:           <pages unit='KiB' size='4'>1966339</pages>
Dec 13 07:29:20 compute-0 nova_compute[241222]:           <pages unit='KiB' size='2048'>0</pages>
Dec 13 07:29:20 compute-0 nova_compute[241222]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 13 07:29:20 compute-0 nova_compute[241222]:           <distances>
Dec 13 07:29:20 compute-0 nova_compute[241222]:             <sibling id='0' value='10'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:           </distances>
Dec 13 07:29:20 compute-0 nova_compute[241222]:           <cpus num='4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:           </cpus>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         </cell>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </cells>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </topology>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <cache>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </cache>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <secmodel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model>selinux</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <doi>0</doi>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </secmodel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <secmodel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model>dac</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <doi>0</doi>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </secmodel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </host>
Dec 13 07:29:20 compute-0 nova_compute[241222]: 
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <guest>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <os_type>hvm</os_type>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <arch name='i686'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <wordsize>32</wordsize>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <domain type='qemu'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <domain type='kvm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </arch>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <pae/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <nonpae/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <acpi default='on' toggle='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <apic default='on' toggle='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <cpuselection/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <deviceboot/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <disksnapshot default='on' toggle='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <externalSnapshot/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </guest>
Dec 13 07:29:20 compute-0 nova_compute[241222]: 
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <guest>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <os_type>hvm</os_type>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <arch name='x86_64'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <wordsize>64</wordsize>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <domain type='qemu'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <domain type='kvm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </arch>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <acpi default='on' toggle='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <apic default='on' toggle='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <cpuselection/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <deviceboot/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <disksnapshot default='on' toggle='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <externalSnapshot/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </guest>
Dec 13 07:29:20 compute-0 nova_compute[241222]: 
Dec 13 07:29:20 compute-0 nova_compute[241222]: </capabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]: 
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.201 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.203 241226 WARNING nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.203 241226 DEBUG nova.virt.libvirt.volume.mount [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.217 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 13 07:29:20 compute-0 nova_compute[241222]: <domainCapabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <path>/usr/libexec/qemu-kvm</path>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <domain>kvm</domain>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <arch>i686</arch>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <vcpu max='4096'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <iothreads supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <os supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <enum name='firmware'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <loader supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>rom</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pflash</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='readonly'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>yes</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>no</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='secure'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>no</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </loader>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </os>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='host-passthrough' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='hostPassthroughMigratable'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>on</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>off</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='maximum' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='maximumMigratable'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>on</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>off</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='host-model' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model fallback='forbid'>EPYC-Milan</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <vendor>AMD</vendor>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <maxphysaddr mode='passthrough' limit='48'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='x2apic'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc-deadline'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='hypervisor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc_adjust'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vaes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vpclmulqdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='spec-ctrl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='stibp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='cmp_legacy'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='overflow-recov'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='succor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='virt-ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='lbrv'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc-scale'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vmcb-clean'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='flushbyasid'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='pause-filter'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='pfthreshold'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='v-vmsave-vmload'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vgif'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='custom' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Denverton'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Denverton-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Genoa'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='auto-ibrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Genoa-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='auto-ibrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Milan-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-128'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-256'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-512'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-noTSX'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v6'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v7'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='KnightsMill'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4fmaps'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4vnniw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512er'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512pf'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='KnightsMill-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4fmaps'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4vnniw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512er'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512pf'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G4-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tbm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G5-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tbm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SierraForest'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ne-convert'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cmpccxadd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SierraForest-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ne-convert'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cmpccxadd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='athlon'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='athlon-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='core2duo'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='core2duo-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='coreduo'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='coreduo-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='n270'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='n270-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='phenom'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='phenom-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <memoryBacking supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <enum name='sourceType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>file</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>anonymous</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>memfd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </memoryBacking>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <devices>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <disk supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='diskDevice'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>disk</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>cdrom</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>floppy</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>lun</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='bus'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>fdc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>scsi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>sata</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-non-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </disk>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <graphics supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vnc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>egl-headless</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dbus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </graphics>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <video supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='modelType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vga</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>cirrus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>none</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>bochs</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ramfb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </video>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <hostdev supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='mode'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>subsystem</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='startupPolicy'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>default</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>mandatory</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>requisite</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>optional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='subsysType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pci</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>scsi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='capsType'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='pciBackend'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </hostdev>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <rng supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-non-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>random</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>egd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>builtin</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </rng>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <filesystem supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='driverType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>path</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>handle</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtiofs</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </filesystem>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <tpm supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tpm-tis</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tpm-crb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>emulator</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>external</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendVersion'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>2.0</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </tpm>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <redirdev supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='bus'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </redirdev>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <channel supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pty</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>unix</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </channel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <crypto supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>qemu</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>builtin</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </crypto>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <interface supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>default</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>passt</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </interface>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <panic supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>isa</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>hyperv</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </panic>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <console supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>null</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pty</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dev</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>file</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pipe</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>stdio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>udp</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tcp</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>unix</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>qemu-vdagent</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dbus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </console>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </devices>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <gic supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <vmcoreinfo supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <genid supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <backingStoreInput supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <backup supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <async-teardown supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <ps2 supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <sev supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <sgx supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <hyperv supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='features'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>relaxed</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vapic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>spinlocks</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vpindex</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>runtime</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>synic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>stimer</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>reset</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vendor_id</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>frequencies</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>reenlightenment</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tlbflush</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ipi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>avic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>emsr_bitmap</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>xmm_input</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <defaults>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <spinlocks>4095</spinlocks>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <stimer_direct>on</stimer_direct>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <tlbflush_direct>on</tlbflush_direct>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <tlbflush_extended>on</tlbflush_extended>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </defaults>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </hyperv>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <launchSecurity supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='sectype'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tdx</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </launchSecurity>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </features>
Dec 13 07:29:20 compute-0 nova_compute[241222]: </domainCapabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.225 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 13 07:29:20 compute-0 nova_compute[241222]: <domainCapabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <path>/usr/libexec/qemu-kvm</path>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <domain>kvm</domain>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <arch>i686</arch>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <vcpu max='240'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <iothreads supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <os supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <enum name='firmware'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <loader supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>rom</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pflash</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='readonly'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>yes</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>no</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='secure'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>no</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </loader>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </os>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='host-passthrough' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='hostPassthroughMigratable'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>on</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>off</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='maximum' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='maximumMigratable'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>on</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>off</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='host-model' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model fallback='forbid'>EPYC-Milan</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <vendor>AMD</vendor>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <maxphysaddr mode='passthrough' limit='48'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='x2apic'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc-deadline'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='hypervisor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc_adjust'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vaes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vpclmulqdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='spec-ctrl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='stibp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='cmp_legacy'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='overflow-recov'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='succor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='virt-ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='lbrv'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc-scale'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vmcb-clean'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='flushbyasid'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='pause-filter'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='pfthreshold'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='v-vmsave-vmload'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vgif'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='custom' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Denverton'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Denverton-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Genoa'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='auto-ibrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Genoa-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='auto-ibrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Milan-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-128'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-256'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-512'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-noTSX'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v6'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v7'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='KnightsMill'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4fmaps'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4vnniw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512er'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512pf'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='KnightsMill-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4fmaps'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4vnniw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512er'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512pf'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G4-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tbm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G5-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tbm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SierraForest'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ne-convert'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cmpccxadd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SierraForest-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ne-convert'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cmpccxadd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='athlon'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='athlon-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='core2duo'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='core2duo-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='coreduo'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='coreduo-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='n270'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='n270-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='phenom'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='phenom-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <memoryBacking supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <enum name='sourceType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>file</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>anonymous</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>memfd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </memoryBacking>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <devices>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <disk supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='diskDevice'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>disk</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>cdrom</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>floppy</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>lun</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='bus'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ide</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>fdc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>scsi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>sata</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-non-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </disk>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <graphics supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vnc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>egl-headless</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dbus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </graphics>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <video supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='modelType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vga</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>cirrus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>none</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>bochs</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ramfb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </video>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <hostdev supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='mode'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>subsystem</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='startupPolicy'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>default</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>mandatory</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>requisite</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>optional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='subsysType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pci</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>scsi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='capsType'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='pciBackend'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </hostdev>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <rng supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-non-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>random</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>egd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>builtin</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </rng>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <filesystem supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='driverType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>path</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>handle</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtiofs</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </filesystem>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <tpm supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tpm-tis</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tpm-crb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>emulator</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>external</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendVersion'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>2.0</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </tpm>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <redirdev supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='bus'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </redirdev>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <channel supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pty</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>unix</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </channel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <crypto supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>qemu</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>builtin</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </crypto>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <interface supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>default</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>passt</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </interface>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <panic supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>isa</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>hyperv</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </panic>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <console supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>null</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pty</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dev</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>file</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pipe</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>stdio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>udp</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tcp</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>unix</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>qemu-vdagent</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dbus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </console>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </devices>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <gic supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <vmcoreinfo supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <genid supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <backingStoreInput supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <backup supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <async-teardown supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <ps2 supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <sev supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <sgx supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <hyperv supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='features'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>relaxed</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vapic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>spinlocks</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vpindex</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>runtime</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>synic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>stimer</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>reset</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vendor_id</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>frequencies</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>reenlightenment</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tlbflush</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ipi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>avic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>emsr_bitmap</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>xmm_input</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <defaults>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <spinlocks>4095</spinlocks>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <stimer_direct>on</stimer_direct>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <tlbflush_direct>on</tlbflush_direct>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <tlbflush_extended>on</tlbflush_extended>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </defaults>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </hyperv>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <launchSecurity supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='sectype'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tdx</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </launchSecurity>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </features>
Dec 13 07:29:20 compute-0 nova_compute[241222]: </domainCapabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.226 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.228 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 13 07:29:20 compute-0 nova_compute[241222]: <domainCapabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <path>/usr/libexec/qemu-kvm</path>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <domain>kvm</domain>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <arch>x86_64</arch>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <vcpu max='4096'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <iothreads supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <os supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <enum name='firmware'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>efi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <loader supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>rom</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pflash</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='readonly'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>yes</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>no</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='secure'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>yes</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>no</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </loader>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </os>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='host-passthrough' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='hostPassthroughMigratable'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>on</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>off</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='maximum' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='maximumMigratable'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>on</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>off</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='host-model' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model fallback='forbid'>EPYC-Milan</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <vendor>AMD</vendor>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <maxphysaddr mode='passthrough' limit='48'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='x2apic'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc-deadline'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='hypervisor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc_adjust'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vaes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vpclmulqdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='spec-ctrl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='stibp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='cmp_legacy'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='overflow-recov'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='succor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='virt-ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='lbrv'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc-scale'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vmcb-clean'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='flushbyasid'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='pause-filter'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='pfthreshold'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='v-vmsave-vmload'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vgif'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='custom' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Denverton'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Denverton-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Genoa'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='auto-ibrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Genoa-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='auto-ibrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Milan-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-128'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-256'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-512'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-noTSX'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v6'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v7'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='KnightsMill'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4fmaps'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4vnniw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512er'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512pf'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='KnightsMill-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4fmaps'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4vnniw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512er'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512pf'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G4-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tbm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G5-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tbm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SierraForest'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ne-convert'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cmpccxadd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SierraForest-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ne-convert'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cmpccxadd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='athlon'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='athlon-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='core2duo'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='core2duo-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='coreduo'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='coreduo-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='n270'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='n270-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='phenom'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='phenom-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <memoryBacking supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <enum name='sourceType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>file</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>anonymous</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>memfd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </memoryBacking>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <devices>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <disk supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='diskDevice'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>disk</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>cdrom</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>floppy</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>lun</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='bus'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>fdc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>scsi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>sata</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-non-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </disk>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <graphics supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vnc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>egl-headless</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dbus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </graphics>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <video supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='modelType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vga</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>cirrus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>none</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>bochs</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ramfb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </video>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <hostdev supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='mode'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>subsystem</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='startupPolicy'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>default</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>mandatory</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>requisite</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>optional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='subsysType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pci</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>scsi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='capsType'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='pciBackend'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </hostdev>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <rng supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-non-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>random</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>egd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>builtin</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </rng>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <filesystem supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='driverType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>path</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>handle</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtiofs</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </filesystem>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <tpm supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tpm-tis</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tpm-crb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>emulator</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>external</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendVersion'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>2.0</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </tpm>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <redirdev supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='bus'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </redirdev>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <channel supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pty</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>unix</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </channel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <crypto supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>qemu</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>builtin</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </crypto>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <interface supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>default</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>passt</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </interface>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <panic supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>isa</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>hyperv</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </panic>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <console supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>null</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pty</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dev</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>file</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pipe</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>stdio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>udp</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tcp</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>unix</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>qemu-vdagent</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dbus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </console>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </devices>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <gic supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <vmcoreinfo supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <genid supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <backingStoreInput supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <backup supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <async-teardown supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <ps2 supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <sev supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <sgx supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <hyperv supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='features'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>relaxed</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vapic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>spinlocks</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vpindex</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>runtime</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>synic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>stimer</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>reset</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vendor_id</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>frequencies</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>reenlightenment</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tlbflush</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ipi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>avic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>emsr_bitmap</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>xmm_input</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <defaults>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <spinlocks>4095</spinlocks>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <stimer_direct>on</stimer_direct>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <tlbflush_direct>on</tlbflush_direct>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <tlbflush_extended>on</tlbflush_extended>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </defaults>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </hyperv>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <launchSecurity supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='sectype'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tdx</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </launchSecurity>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </features>
Dec 13 07:29:20 compute-0 nova_compute[241222]: </domainCapabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.269 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 13 07:29:20 compute-0 nova_compute[241222]: <domainCapabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <path>/usr/libexec/qemu-kvm</path>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <domain>kvm</domain>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <arch>x86_64</arch>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <vcpu max='240'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <iothreads supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <os supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <enum name='firmware'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <loader supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>rom</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pflash</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='readonly'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>yes</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>no</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='secure'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>no</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </loader>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </os>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='host-passthrough' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='hostPassthroughMigratable'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>on</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>off</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='maximum' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='maximumMigratable'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>on</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>off</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='host-model' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model fallback='forbid'>EPYC-Milan</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <vendor>AMD</vendor>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <maxphysaddr mode='passthrough' limit='48'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='x2apic'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc-deadline'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='hypervisor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc_adjust'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vaes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vpclmulqdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='spec-ctrl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='stibp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='cmp_legacy'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='overflow-recov'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='succor'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='virt-ssbd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='lbrv'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='tsc-scale'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vmcb-clean'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='flushbyasid'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='pause-filter'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='pfthreshold'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='v-vmsave-vmload'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='vgif'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <mode name='custom' supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Broadwell-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cascadelake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Cooperlake-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Denverton'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Denverton-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Genoa'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='auto-ibrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Genoa-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='auto-ibrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='EPYC-Milan-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amd-psfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='no-nested-data-bp'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='null-sel-clr-base'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='stibp-always-on'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='GraniteRapids-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-128'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-256'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx10-512'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='prefetchiti'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Haswell-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-noTSX'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v6'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Icelake-Server-v7'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='KnightsMill'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4fmaps'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4vnniw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512er'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512pf'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='KnightsMill-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4fmaps'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-4vnniw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512er'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512pf'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G4-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tbm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Opteron_G5-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fma4'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tbm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xop'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SapphireRapids-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='amx-tile'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-bf16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-fp16'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512-vpopcntdq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bitalg'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vbmi2'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrc'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fzrm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='la57'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='taa-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='tsx-ldtrk'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='xfd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SierraForest'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ne-convert'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cmpccxadd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='SierraForest-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ifma'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-ne-convert'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx-vnni-int8'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='bus-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cmpccxadd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fbsdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='fsrs'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ibrs-all'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mcdt-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='pbrsb-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='psdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='sbdr-ssdp-no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='serialize'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Client-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='hle'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='rtm'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Skylake-Server-v5'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512bw'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512cd'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512dq'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512f'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='avx512vl'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='mpx'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v2'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v3'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='core-capability'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='split-lock-detect'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='Snowridge-v4'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='cldemote'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='gfni'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdir64b'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='movdiri'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='athlon'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='athlon-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='core2duo'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='core2duo-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='coreduo'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='coreduo-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='n270'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='n270-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='ss'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='phenom'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <blockers model='phenom-v1'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnow'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <feature name='3dnowext'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </blockers>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </mode>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </cpu>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <memoryBacking supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <enum name='sourceType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>file</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>anonymous</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <value>memfd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </memoryBacking>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <devices>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <disk supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='diskDevice'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>disk</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>cdrom</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>floppy</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>lun</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='bus'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ide</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>fdc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>scsi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>sata</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-non-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </disk>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <graphics supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vnc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>egl-headless</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dbus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </graphics>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <video supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='modelType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vga</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>cirrus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>none</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>bochs</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ramfb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </video>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <hostdev supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='mode'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>subsystem</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='startupPolicy'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>default</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>mandatory</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>requisite</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>optional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='subsysType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pci</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>scsi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='capsType'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='pciBackend'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </hostdev>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <rng supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtio-non-transitional</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>random</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>egd</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>builtin</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </rng>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <filesystem supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='driverType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>path</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>handle</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>virtiofs</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </filesystem>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <tpm supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tpm-tis</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tpm-crb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>emulator</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>external</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendVersion'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>2.0</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </tpm>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <redirdev supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='bus'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>usb</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </redirdev>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <channel supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pty</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>unix</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </channel>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <crypto supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>qemu</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendModel'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>builtin</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </crypto>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <interface supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='backendType'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>default</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>passt</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </interface>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <panic supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='model'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>isa</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>hyperv</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </panic>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <console supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='type'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>null</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vc</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pty</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dev</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>file</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>pipe</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>stdio</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>udp</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tcp</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>unix</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>qemu-vdagent</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>dbus</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </console>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </devices>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   <features>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <gic supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <vmcoreinfo supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <genid supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <backingStoreInput supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <backup supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <async-teardown supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <ps2 supported='yes'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <sev supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <sgx supported='no'/>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <hyperv supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='features'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>relaxed</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vapic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>spinlocks</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vpindex</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>runtime</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>synic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>stimer</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>reset</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>vendor_id</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>frequencies</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>reenlightenment</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tlbflush</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>ipi</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>avic</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>emsr_bitmap</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>xmm_input</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <defaults>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <spinlocks>4095</spinlocks>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <stimer_direct>on</stimer_direct>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <tlbflush_direct>on</tlbflush_direct>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <tlbflush_extended>on</tlbflush_extended>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </defaults>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </hyperv>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     <launchSecurity supported='yes'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       <enum name='sectype'>
Dec 13 07:29:20 compute-0 nova_compute[241222]:         <value>tdx</value>
Dec 13 07:29:20 compute-0 nova_compute[241222]:       </enum>
Dec 13 07:29:20 compute-0 nova_compute[241222]:     </launchSecurity>
Dec 13 07:29:20 compute-0 nova_compute[241222]:   </features>
Dec 13 07:29:20 compute-0 nova_compute[241222]: </domainCapabilities>
Dec 13 07:29:20 compute-0 nova_compute[241222]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.312 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.312 241226 INFO nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Secure Boot support detected
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.313 241226 INFO nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.320 241226 DEBUG nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.345 241226 INFO nova.virt.node [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Determined node identity 1d614cf3-e40f-4742-a628-7a61041be9be from /var/lib/nova/compute_id
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.357 241226 WARNING nova.compute.manager [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Compute nodes ['1d614cf3-e40f-4742-a628-7a61041be9be'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.377 241226 INFO nova.compute.manager [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 WARNING nova.compute.manager [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.397 241226 DEBUG oslo_concurrency.processutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:29:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:29:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001829381' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:29:20 compute-0 nova_compute[241222]: 2025-12-13 07:29:20.805 241226 DEBUG oslo_concurrency.processutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.013 241226 WARNING nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.014 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.014 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.014 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.030 241226 WARNING nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] No compute node record for compute-0.ctlplane.example.com:1d614cf3-e40f-4742-a628-7a61041be9be: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1d614cf3-e40f-4742-a628-7a61041be9be could not be found.
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.040 241226 INFO nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1d614cf3-e40f-4742-a628-7a61041be9be
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.082 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.082 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 07:29:21 compute-0 ceph-mon[74928]: pgmap v577: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:21 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3001829381' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:29:21 compute-0 nova_compute[241222]: 2025-12-13 07:29:21.808 241226 INFO nova.scheduler.client.report [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] [req-24f5b894-ad56-4dff-89e5-b13eb3b342df] Created resource provider record via placement API for resource provider with UUID 1d614cf3-e40f-4742-a628-7a61041be9be and name compute-0.ctlplane.example.com.
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.146 241226 DEBUG oslo_concurrency.processutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:29:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:29:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364365511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.557 241226 DEBUG oslo_concurrency.processutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.561 241226 DEBUG nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 13 07:29:22 compute-0 nova_compute[241222]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.561 241226 INFO nova.virt.libvirt.host [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] kernel doesn't support AMD SEV
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.562 241226 DEBUG nova.compute.provider_tree [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updating inventory in ProviderTree for provider 1d614cf3-e40f-4742-a628-7a61041be9be with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.562 241226 DEBUG nova.virt.libvirt.driver [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.596 241226 DEBUG nova.scheduler.client.report [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updated inventory for provider 1d614cf3-e40f-4742-a628-7a61041be9be with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.596 241226 DEBUG nova.compute.provider_tree [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updating resource provider 1d614cf3-e40f-4742-a628-7a61041be9be generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.597 241226 DEBUG nova.compute.provider_tree [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updating inventory in ProviderTree for provider 1d614cf3-e40f-4742-a628-7a61041be9be with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.661 241226 DEBUG nova.compute.provider_tree [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Updating resource provider 1d614cf3-e40f-4742-a628-7a61041be9be generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.677 241226 DEBUG nova.compute.resource_tracker [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.677 241226 DEBUG oslo_concurrency.lockutils [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.677 241226 DEBUG nova.service [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 13 07:29:22 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/364365511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.720 241226 DEBUG nova.service [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 13 07:29:22 compute-0 nova_compute[241222]: 2025-12-13 07:29:22.720 241226 DEBUG nova.servicegroup.drivers.db [None req-1590a0b9-4833-4162-8cf8-267ed53af59d - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 13 07:29:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:23 compute-0 ceph-mon[74928]: pgmap v578: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:24 compute-0 podman[241549]: 2025-12-13 07:29:24.721986608 +0000 UTC m=+0.062068246 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 07:29:25 compute-0 ceph-mon[74928]: pgmap v579: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:27 compute-0 ceph-mon[74928]: pgmap v580: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:29 compute-0 ceph-mon[74928]: pgmap v581: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:30 compute-0 ceph-mon[74928]: pgmap v582: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:33 compute-0 ceph-mon[74928]: pgmap v583: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:35 compute-0 ceph-mon[74928]: pgmap v584: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:36 compute-0 podman[241572]: 2025-12-13 07:29:36.695918078 +0000 UTC m=+0.037998323 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202)
Dec 13 07:29:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:29:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722253597' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:29:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722253597' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:29:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332556817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:29:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332556817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: pgmap v585: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:37 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/2722253597' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/2722253597' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/2332556817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/2332556817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:29:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655560024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:29:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655560024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:29:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:29:38
Dec 13 07:29:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:29:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:29:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms']
Dec 13 07:29:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:29:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:38 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/3655560024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:29:38 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/3655560024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:29:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:29:39 compute-0 ceph-mon[74928]: pgmap v586: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:41 compute-0 ceph-mon[74928]: pgmap v587: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:29:41.638 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:29:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:29:41.638 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:29:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:29:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:29:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:43 compute-0 ceph-mon[74928]: pgmap v588: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:45 compute-0 ceph-mon[74928]: pgmap v589: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:45 compute-0 podman[241589]: 2025-12-13 07:29:45.685656285 +0000 UTC m=+0.031276081 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 07:29:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:47 compute-0 ceph-mon[74928]: pgmap v590: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:29:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:49 compute-0 ceph-mon[74928]: pgmap v591: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:51 compute-0 ceph-mon[74928]: pgmap v592: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:53 compute-0 ceph-mon[74928]: pgmap v593: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:55 compute-0 ceph-mon[74928]: pgmap v594: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:55 compute-0 podman[241606]: 2025-12-13 07:29:55.716271323 +0000 UTC m=+0.057375149 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 07:29:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:57 compute-0 ceph-mon[74928]: pgmap v595: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:29:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:29:59 compute-0 ceph-mon[74928]: pgmap v596: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:01 compute-0 ceph-mon[74928]: pgmap v597: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:01 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 13 07:30:01 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/344088898' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 07:30:01 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 07:30:01 compute-0 ceph-mgr[75200]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 07:30:01 compute-0 ceph-mgr[75200]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 07:30:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:02 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/344088898' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 07:30:02 compute-0 ceph-mon[74928]: from='client.14316 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 07:30:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:03 compute-0 ceph-mon[74928]: pgmap v598: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:05 compute-0 ceph-mon[74928]: pgmap v599: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:07 compute-0 ceph-mon[74928]: pgmap v600: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:07 compute-0 podman[241629]: 2025-12-13 07:30:07.71693384 +0000 UTC m=+0.051264779 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:30:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 07:30:08 compute-0 sudo[241646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:30:08 compute-0 sudo[241646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:08 compute-0 sudo[241646]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:08 compute-0 sudo[241671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:30:08 compute-0 sudo[241671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:30:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:30:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:30:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:30:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:30:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:30:09 compute-0 sudo[241671]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:30:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:30:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:30:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:30:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:30:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:30:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:30:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:30:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:30:09 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:30:09 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:30:09 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:30:09 compute-0 sudo[241725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:30:09 compute-0 sudo[241725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:09 compute-0 sudo[241725]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:09 compute-0 sudo[241750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:30:09 compute-0 sudo[241750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:09 compute-0 ceph-mon[74928]: pgmap v601: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 07:30:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:30:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:30:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:30:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:30:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:30:09 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:30:09 compute-0 podman[241785]: 2025-12-13 07:30:09.633159706 +0000 UTC m=+0.030059296 container create a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 07:30:09 compute-0 systemd[1]: Started libpod-conmon-a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6.scope.
Dec 13 07:30:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:30:09 compute-0 podman[241785]: 2025-12-13 07:30:09.692671251 +0000 UTC m=+0.089570851 container init a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:30:09 compute-0 podman[241785]: 2025-12-13 07:30:09.697235613 +0000 UTC m=+0.094135203 container start a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 07:30:09 compute-0 podman[241785]: 2025-12-13 07:30:09.699200398 +0000 UTC m=+0.096099988 container attach a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:30:09 compute-0 affectionate_mcclintock[241798]: 167 167
Dec 13 07:30:09 compute-0 systemd[1]: libpod-a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6.scope: Deactivated successfully.
Dec 13 07:30:09 compute-0 podman[241785]: 2025-12-13 07:30:09.701301559 +0000 UTC m=+0.098201149 container died a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6ed6aa34c9f31df2d09b0bc746a9ddc5b01d7c69a416250d08baf1f3e0f03bb-merged.mount: Deactivated successfully.
Dec 13 07:30:09 compute-0 podman[241785]: 2025-12-13 07:30:09.622745152 +0000 UTC m=+0.019644762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:30:09 compute-0 podman[241785]: 2025-12-13 07:30:09.721560524 +0000 UTC m=+0.118460113 container remove a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mcclintock, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 07:30:09 compute-0 systemd[1]: libpod-conmon-a0d4b7ca48b81fbf7912b56b53a4f8610a1ae222de60a88e12443676fe1cb8b6.scope: Deactivated successfully.
Dec 13 07:30:09 compute-0 podman[241819]: 2025-12-13 07:30:09.842784984 +0000 UTC m=+0.028489914 container create 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:30:09 compute-0 systemd[1]: Started libpod-conmon-51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688.scope.
Dec 13 07:30:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:09 compute-0 podman[241819]: 2025-12-13 07:30:09.905948406 +0000 UTC m=+0.091653366 container init 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:30:09 compute-0 podman[241819]: 2025-12-13 07:30:09.915004856 +0000 UTC m=+0.100709787 container start 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:30:09 compute-0 podman[241819]: 2025-12-13 07:30:09.91686296 +0000 UTC m=+0.102567911 container attach 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:30:09 compute-0 podman[241819]: 2025-12-13 07:30:09.83082816 +0000 UTC m=+0.016533090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:30:10 compute-0 angry_turing[241832]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:30:10 compute-0 angry_turing[241832]: --> All data devices are unavailable
Dec 13 07:30:10 compute-0 systemd[1]: libpod-51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688.scope: Deactivated successfully.
Dec 13 07:30:10 compute-0 podman[241819]: 2025-12-13 07:30:10.279537972 +0000 UTC m=+0.465242903 container died 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 07:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2ec719fb0fe5fda83b0d78207509324dde0821cecc9c4c29d7e601d9aa6a4d-merged.mount: Deactivated successfully.
Dec 13 07:30:10 compute-0 podman[241819]: 2025-12-13 07:30:10.304271271 +0000 UTC m=+0.489976201 container remove 51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 07:30:10 compute-0 systemd[1]: libpod-conmon-51c6ed31266dfc2fa795a7aef0bcabac4fa16918fa6377fd9c0c19a56a580688.scope: Deactivated successfully.
Dec 13 07:30:10 compute-0 sudo[241750]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:10 compute-0 sudo[241861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:30:10 compute-0 sudo[241861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:10 compute-0 sudo[241861]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:10 compute-0 sudo[241886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:30:10 compute-0 sudo[241886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:10 compute-0 podman[241921]: 2025-12-13 07:30:10.638640498 +0000 UTC m=+0.027417547 container create 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:30:10 compute-0 systemd[1]: Started libpod-conmon-3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da.scope.
Dec 13 07:30:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:30:10 compute-0 podman[241921]: 2025-12-13 07:30:10.68932781 +0000 UTC m=+0.078104859 container init 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 07:30:10 compute-0 podman[241921]: 2025-12-13 07:30:10.693647804 +0000 UTC m=+0.082424853 container start 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 07:30:10 compute-0 podman[241921]: 2025-12-13 07:30:10.696355415 +0000 UTC m=+0.085132484 container attach 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:30:10 compute-0 relaxed_pike[241934]: 167 167
Dec 13 07:30:10 compute-0 systemd[1]: libpod-3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da.scope: Deactivated successfully.
Dec 13 07:30:10 compute-0 podman[241921]: 2025-12-13 07:30:10.69738427 +0000 UTC m=+0.086161329 container died 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3305bb84b43bc11149e1e3794236eb39f02fdc1eadad8d5e2481ce3aa03c13db-merged.mount: Deactivated successfully.
Dec 13 07:30:10 compute-0 podman[241921]: 2025-12-13 07:30:10.713722512 +0000 UTC m=+0.102499561 container remove 3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_pike, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 07:30:10 compute-0 podman[241921]: 2025-12-13 07:30:10.627901394 +0000 UTC m=+0.016678464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:30:10 compute-0 systemd[1]: libpod-conmon-3530185b6d725fab07c95f88318315004981a8d9fa9e242f5d4230fb88b785da.scope: Deactivated successfully.
Dec 13 07:30:10 compute-0 podman[241956]: 2025-12-13 07:30:10.833066376 +0000 UTC m=+0.029229024 container create 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:30:10 compute-0 systemd[1]: Started libpod-conmon-1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a.scope.
Dec 13 07:30:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:10 compute-0 podman[241956]: 2025-12-13 07:30:10.885492968 +0000 UTC m=+0.081655627 container init 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:30:10 compute-0 podman[241956]: 2025-12-13 07:30:10.890879748 +0000 UTC m=+0.087042395 container start 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:30:10 compute-0 podman[241956]: 2025-12-13 07:30:10.89246063 +0000 UTC m=+0.088623299 container attach 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:30:10 compute-0 podman[241956]: 2025-12-13 07:30:10.820980139 +0000 UTC m=+0.017142787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]: {
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:     "0": [
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:         {
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "devices": [
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "/dev/loop3"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             ],
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_name": "ceph_lv0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_size": "21470642176",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "name": "ceph_lv0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "tags": {
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cluster_name": "ceph",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.crush_device_class": "",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.encrypted": "0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.objectstore": "bluestore",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osd_id": "0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.type": "block",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.vdo": "0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.with_tpm": "0"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             },
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "type": "block",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "vg_name": "ceph_vg0"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:         }
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:     ],
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:     "1": [
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:         {
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "devices": [
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "/dev/loop4"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             ],
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_name": "ceph_lv1",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_size": "21470642176",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "name": "ceph_lv1",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "tags": {
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cluster_name": "ceph",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.crush_device_class": "",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.encrypted": "0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.objectstore": "bluestore",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osd_id": "1",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.type": "block",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.vdo": "0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.with_tpm": "0"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             },
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "type": "block",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "vg_name": "ceph_vg1"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:         }
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:     ],
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:     "2": [
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:         {
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "devices": [
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "/dev/loop5"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             ],
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_name": "ceph_lv2",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_size": "21470642176",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "name": "ceph_lv2",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "tags": {
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.cluster_name": "ceph",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.crush_device_class": "",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.encrypted": "0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.objectstore": "bluestore",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osd_id": "2",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.type": "block",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.vdo": "0",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:                 "ceph.with_tpm": "0"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             },
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "type": "block",
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:             "vg_name": "ceph_vg2"
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:         }
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]:     ]
Dec 13 07:30:11 compute-0 inspiring_ritchie[241969]: }
Dec 13 07:30:11 compute-0 systemd[1]: libpod-1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a.scope: Deactivated successfully.
Dec 13 07:30:11 compute-0 podman[241956]: 2025-12-13 07:30:11.132341962 +0000 UTC m=+0.328504600 container died 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e29ce31f1b062a0724032a9521600a449645ee3a3f1cfb48fc11a1a6cdf0d457-merged.mount: Deactivated successfully.
Dec 13 07:30:11 compute-0 podman[241956]: 2025-12-13 07:30:11.157545535 +0000 UTC m=+0.353708184 container remove 1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:30:11 compute-0 systemd[1]: libpod-conmon-1530190189d19b9e9331d44dcd2939a6bbfbcc4064c6c192bc661bc66d8eaf7a.scope: Deactivated successfully.
Dec 13 07:30:11 compute-0 sudo[241886]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:11 compute-0 sudo[241989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:30:11 compute-0 sudo[241989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:11 compute-0 sudo[241989]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:11 compute-0 sudo[242014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:30:11 compute-0 sudo[242014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:11 compute-0 podman[242049]: 2025-12-13 07:30:11.494229615 +0000 UTC m=+0.026963974 container create 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:30:11 compute-0 systemd[1]: Started libpod-conmon-01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6.scope.
Dec 13 07:30:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:30:11 compute-0 podman[242049]: 2025-12-13 07:30:11.551513893 +0000 UTC m=+0.084248262 container init 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:30:11 compute-0 podman[242049]: 2025-12-13 07:30:11.557954082 +0000 UTC m=+0.090688432 container start 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 07:30:11 compute-0 podman[242049]: 2025-12-13 07:30:11.559217217 +0000 UTC m=+0.091951567 container attach 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:30:11 compute-0 sleepy_rhodes[242062]: 167 167
Dec 13 07:30:11 compute-0 systemd[1]: libpod-01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6.scope: Deactivated successfully.
Dec 13 07:30:11 compute-0 conmon[242062]: conmon 01de4ee1c5c1106617ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6.scope/container/memory.events
Dec 13 07:30:11 compute-0 podman[242049]: 2025-12-13 07:30:11.562008847 +0000 UTC m=+0.094743196 container died 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:30:11 compute-0 podman[242049]: 2025-12-13 07:30:11.578715262 +0000 UTC m=+0.111449611 container remove 01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:30:11 compute-0 podman[242049]: 2025-12-13 07:30:11.483577284 +0000 UTC m=+0.016311654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:30:11 compute-0 systemd[1]: libpod-conmon-01de4ee1c5c1106617ac36765fbf1f747b57ebddb3d7e131d4a2334dbbac96e6.scope: Deactivated successfully.
Dec 13 07:30:11 compute-0 ceph-mon[74928]: pgmap v602: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d982fdeea4753387e880d4e9518c86bf4d2a37bccf3b49bd78866ce05646d82-merged.mount: Deactivated successfully.
Dec 13 07:30:11 compute-0 podman[242084]: 2025-12-13 07:30:11.701989236 +0000 UTC m=+0.028741978 container create a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 07:30:11 compute-0 systemd[1]: Started libpod-conmon-a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d.scope.
Dec 13 07:30:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:30:11 compute-0 podman[242084]: 2025-12-13 07:30:11.755597991 +0000 UTC m=+0.082350724 container init a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 07:30:11 compute-0 podman[242084]: 2025-12-13 07:30:11.760636054 +0000 UTC m=+0.087388787 container start a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:30:11 compute-0 podman[242084]: 2025-12-13 07:30:11.762022582 +0000 UTC m=+0.088775324 container attach a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 07:30:11 compute-0 podman[242084]: 2025-12-13 07:30:11.690363905 +0000 UTC m=+0.017116657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:30:12 compute-0 lvm[242175]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:30:12 compute-0 lvm[242174]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:30:12 compute-0 lvm[242174]: VG ceph_vg0 finished
Dec 13 07:30:12 compute-0 lvm[242175]: VG ceph_vg1 finished
Dec 13 07:30:12 compute-0 lvm[242178]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:30:12 compute-0 lvm[242178]: VG ceph_vg2 finished
Dec 13 07:30:12 compute-0 recursing_brattain[242097]: {}
Dec 13 07:30:12 compute-0 systemd[1]: libpod-a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d.scope: Deactivated successfully.
Dec 13 07:30:12 compute-0 podman[242084]: 2025-12-13 07:30:12.374469406 +0000 UTC m=+0.701222168 container died a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 07:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a85522dfc814a6a0d218e3901948a6458026550e91b6a01ee720c53e4b3ef6e-merged.mount: Deactivated successfully.
Dec 13 07:30:12 compute-0 podman[242084]: 2025-12-13 07:30:12.399715017 +0000 UTC m=+0.726467749 container remove a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:30:12 compute-0 systemd[1]: libpod-conmon-a815f79513e526e099b42018146626b811725e5a9bd97c987281a6a873ce704d.scope: Deactivated successfully.
Dec 13 07:30:12 compute-0 sudo[242014]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:30:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:30:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:30:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:30:12 compute-0 sudo[242189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:30:12 compute-0 sudo[242189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:30:12 compute-0 sudo[242189]: pam_unix(sudo:session): session closed for user root
Dec 13 07:30:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:30:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:30:13 compute-0 ceph-mon[74928]: pgmap v603: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:15 compute-0 ceph-mon[74928]: pgmap v604: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 13 07:30:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4262892468' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 07:30:16 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14334 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 07:30:16 compute-0 ceph-mgr[75200]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 07:30:16 compute-0 ceph-mgr[75200]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 07:30:16 compute-0 podman[242214]: 2025-12-13 07:30:16.698313016 +0000 UTC m=+0.040935736 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 13 07:30:17 compute-0 ceph-mon[74928]: pgmap v605: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:17 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/4262892468' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 07:30:17 compute-0 ceph-mon[74928]: from='client.14334 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 07:30:17 compute-0 nova_compute[241222]: 2025-12-13 07:30:17.721 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:17 compute-0 nova_compute[241222]: 2025-12-13 07:30:17.737 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:19 compute-0 ceph-mon[74928]: pgmap v606: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.570 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.570 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.570 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.579 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.579 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.579 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.580 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.581 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.596 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.597 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.597 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.597 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.597 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:30:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:30:19 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802453350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:30:19 compute-0 nova_compute[241222]: 2025-12-13 07:30:19.996 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.204 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.205 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5170MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.206 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.206 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.261 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.261 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.273 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:30:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 07:30:20 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/802453350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:30:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:30:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2029954754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.683 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.687 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.710 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.711 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 07:30:20 compute-0 nova_compute[241222]: 2025-12-13 07:30:20.711 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:30:21 compute-0 ceph-mon[74928]: pgmap v607: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 07:30:21 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2029954754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:30:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:23 compute-0 ceph-mon[74928]: pgmap v608: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:25 compute-0 ceph-mon[74928]: pgmap v609: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:26 compute-0 podman[242274]: 2025-12-13 07:30:26.714991848 +0000 UTC m=+0.057272566 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 07:30:27 compute-0 ceph-mon[74928]: pgmap v610: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:29 compute-0 ceph-mon[74928]: pgmap v611: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:31 compute-0 ceph-mon[74928]: pgmap v612: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:33 compute-0 ceph-mon[74928]: pgmap v613: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:35 compute-0 ceph-mon[74928]: pgmap v614: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:37 compute-0 ceph-mon[74928]: pgmap v615: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:30:38
Dec 13 07:30:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:30:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:30:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'vms', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root']
Dec 13 07:30:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:30:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:38 compute-0 podman[242297]: 2025-12-13 07:30:38.69816239 +0000 UTC m=+0.040051613 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:30:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:30:39 compute-0 ceph-mon[74928]: pgmap v616: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:41 compute-0 ceph-mon[74928]: pgmap v617: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:30:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:30:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:30:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:30:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:30:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:30:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:43 compute-0 ceph-mon[74928]: pgmap v618: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:45 compute-0 ceph-mon[74928]: pgmap v619: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:47 compute-0 ceph-mon[74928]: pgmap v620: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:47 compute-0 podman[242314]: 2025-12-13 07:30:47.69297754 +0000 UTC m=+0.036013871 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec 13 07:30:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.803558) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047803623, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1705, "num_deletes": 250, "total_data_size": 2818792, "memory_usage": 2855192, "flush_reason": "Manual Compaction"}
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047808082, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1607824, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11728, "largest_seqno": 13432, "table_properties": {"data_size": 1602211, "index_size": 2753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14431, "raw_average_key_size": 20, "raw_value_size": 1589695, "raw_average_value_size": 2226, "num_data_blocks": 127, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610861, "oldest_key_time": 1765610861, "file_creation_time": 1765611047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 4538 microseconds, and 3313 cpu microseconds.
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808104) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1607824 bytes OK
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808117) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808630) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808642) EVENT_LOG_v1 {"time_micros": 1765611047808639, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.808651) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2811479, prev total WAL file size 2811479, number of live WAL files 2.
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.809225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1570KB)], [29(8027KB)]
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047809257, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9828122, "oldest_snapshot_seqno": -1}
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3980 keys, 7643997 bytes, temperature: kUnknown
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047824570, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7643997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7615665, "index_size": 17278, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 94801, "raw_average_key_size": 23, "raw_value_size": 7542171, "raw_average_value_size": 1895, "num_data_blocks": 754, "num_entries": 3980, "num_filter_entries": 3980, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765611047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.824697) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7643997 bytes
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.825081) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 640.4 rd, 498.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.9) write-amplify(4.8) OK, records in: 4405, records dropped: 425 output_compression: NoCompression
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.825096) EVENT_LOG_v1 {"time_micros": 1765611047825088, "job": 12, "event": "compaction_finished", "compaction_time_micros": 15348, "compaction_time_cpu_micros": 12689, "output_level": 6, "num_output_files": 1, "total_output_size": 7643997, "num_input_records": 4405, "num_output_records": 3980, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047825322, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611047826320, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.809164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:30:47 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:30:47.826342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:30:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:48 compute-0 ceph-mon[74928]: pgmap v621: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:51 compute-0 ceph-mon[74928]: pgmap v622: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:53 compute-0 ceph-mon[74928]: pgmap v623: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:55 compute-0 ceph-mon[74928]: pgmap v624: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:57 compute-0 ceph-mon[74928]: pgmap v625: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:57 compute-0 podman[242330]: 2025-12-13 07:30:57.716187818 +0000 UTC m=+0.058325525 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 07:30:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:30:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:30:59 compute-0 ceph-mon[74928]: pgmap v626: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:01 compute-0 ceph-mon[74928]: pgmap v627: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:03 compute-0 ceph-mon[74928]: pgmap v628: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:05 compute-0 ceph-mon[74928]: pgmap v629: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:07 compute-0 ceph-mon[74928]: pgmap v630: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:31:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:31:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:31:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:31:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:31:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:31:09 compute-0 ceph-mon[74928]: pgmap v631: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:09 compute-0 podman[242353]: 2025-12-13 07:31:09.697209383 +0000 UTC m=+0.036939963 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:31:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:11 compute-0 ceph-mon[74928]: pgmap v632: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:12 compute-0 sudo[242370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:31:12 compute-0 sudo[242370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:12 compute-0 sudo[242370]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:12 compute-0 sudo[242395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:31:12 compute-0 sudo[242395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:12 compute-0 sudo[242395]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:31:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:31:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:31:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:31:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:31:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:31:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:31:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:31:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:31:12 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:31:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:31:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:31:13 compute-0 sudo[242449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:31:13 compute-0 sudo[242449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:13 compute-0 sudo[242449]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:13 compute-0 sudo[242474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:31:13 compute-0 sudo[242474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:13 compute-0 podman[242509]: 2025-12-13 07:31:13.287043225 +0000 UTC m=+0.026529087 container create 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 07:31:13 compute-0 systemd[1]: Started libpod-conmon-76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1.scope.
Dec 13 07:31:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:31:13 compute-0 podman[242509]: 2025-12-13 07:31:13.341339955 +0000 UTC m=+0.080825816 container init 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 13 07:31:13 compute-0 podman[242509]: 2025-12-13 07:31:13.345859524 +0000 UTC m=+0.085345385 container start 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:31:13 compute-0 podman[242509]: 2025-12-13 07:31:13.347022189 +0000 UTC m=+0.086508051 container attach 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:31:13 compute-0 optimistic_beaver[242523]: 167 167
Dec 13 07:31:13 compute-0 systemd[1]: libpod-76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1.scope: Deactivated successfully.
Dec 13 07:31:13 compute-0 conmon[242523]: conmon 76899710537dcff46b45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1.scope/container/memory.events
Dec 13 07:31:13 compute-0 podman[242509]: 2025-12-13 07:31:13.350562718 +0000 UTC m=+0.090048578 container died 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7d87d158f9c036a2747833ad73b7a2fd3fcfd62f622219b960daf4752e1569f-merged.mount: Deactivated successfully.
Dec 13 07:31:13 compute-0 podman[242509]: 2025-12-13 07:31:13.372004897 +0000 UTC m=+0.111490759 container remove 76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:31:13 compute-0 podman[242509]: 2025-12-13 07:31:13.27574985 +0000 UTC m=+0.015235731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:31:13 compute-0 systemd[1]: libpod-conmon-76899710537dcff46b45a231e73be8c9953e8e3a73684d64542bdb99ee08fee1.scope: Deactivated successfully.
Dec 13 07:31:13 compute-0 podman[242544]: 2025-12-13 07:31:13.491989456 +0000 UTC m=+0.028416164 container create 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:31:13 compute-0 systemd[1]: Started libpod-conmon-3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d.scope.
Dec 13 07:31:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:13 compute-0 podman[242544]: 2025-12-13 07:31:13.566097669 +0000 UTC m=+0.102524378 container init 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 07:31:13 compute-0 podman[242544]: 2025-12-13 07:31:13.570800221 +0000 UTC m=+0.107226931 container start 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:31:13 compute-0 podman[242544]: 2025-12-13 07:31:13.57193222 +0000 UTC m=+0.108358928 container attach 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:31:13 compute-0 podman[242544]: 2025-12-13 07:31:13.480937794 +0000 UTC m=+0.017364514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:31:13 compute-0 ceph-mon[74928]: pgmap v633: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:31:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:31:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:31:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:31:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:31:13 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:31:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:31:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4261629339' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:31:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:31:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4261629339' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:31:13 compute-0 hungry_bhaskara[242557]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:31:13 compute-0 hungry_bhaskara[242557]: --> All data devices are unavailable
Dec 13 07:31:13 compute-0 systemd[1]: libpod-3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d.scope: Deactivated successfully.
Dec 13 07:31:13 compute-0 podman[242544]: 2025-12-13 07:31:13.958350428 +0000 UTC m=+0.494777147 container died 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 07:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b29b3ee99d82ea9ebfeaa492d01661f1b110615d1d575750a5379ae3642a0f95-merged.mount: Deactivated successfully.
Dec 13 07:31:13 compute-0 podman[242544]: 2025-12-13 07:31:13.982358642 +0000 UTC m=+0.518785352 container remove 3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bhaskara, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 07:31:13 compute-0 systemd[1]: libpod-conmon-3d84250508cfe564086d9f7366f63c72cc08034496aff18b12a3b304f362809d.scope: Deactivated successfully.
Dec 13 07:31:14 compute-0 sudo[242474]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:14 compute-0 sudo[242587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:31:14 compute-0 sudo[242587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:14 compute-0 sudo[242587]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:14 compute-0 sudo[242612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:31:14 compute-0 sudo[242612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:14 compute-0 podman[242648]: 2025-12-13 07:31:14.307247081 +0000 UTC m=+0.027526431 container create eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:31:14 compute-0 systemd[1]: Started libpod-conmon-eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b.scope.
Dec 13 07:31:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:31:14 compute-0 podman[242648]: 2025-12-13 07:31:14.363087853 +0000 UTC m=+0.083367203 container init eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 07:31:14 compute-0 podman[242648]: 2025-12-13 07:31:14.368399542 +0000 UTC m=+0.088678882 container start eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 07:31:14 compute-0 podman[242648]: 2025-12-13 07:31:14.369715256 +0000 UTC m=+0.089994606 container attach eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:31:14 compute-0 modest_gould[242661]: 167 167
Dec 13 07:31:14 compute-0 systemd[1]: libpod-eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b.scope: Deactivated successfully.
Dec 13 07:31:14 compute-0 podman[242648]: 2025-12-13 07:31:14.373111702 +0000 UTC m=+0.093391042 container died eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 07:31:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa456750402eea653060f990172567f32da9c8388ad7c9758d86c2bb63b61fb7-merged.mount: Deactivated successfully.
Dec 13 07:31:14 compute-0 podman[242648]: 2025-12-13 07:31:14.39012779 +0000 UTC m=+0.110407120 container remove eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:31:14 compute-0 podman[242648]: 2025-12-13 07:31:14.296273266 +0000 UTC m=+0.016552626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:31:14 compute-0 systemd[1]: libpod-conmon-eda81074bb97e25cae4d9b9566d4679a1578db171aa0135296c372e4e506fa6b.scope: Deactivated successfully.
Dec 13 07:31:14 compute-0 podman[242683]: 2025-12-13 07:31:14.51254917 +0000 UTC m=+0.028155565 container create c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 13 07:31:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:14 compute-0 systemd[1]: Started libpod-conmon-c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402.scope.
Dec 13 07:31:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:14 compute-0 podman[242683]: 2025-12-13 07:31:14.561858421 +0000 UTC m=+0.077464816 container init c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 07:31:14 compute-0 podman[242683]: 2025-12-13 07:31:14.567613322 +0000 UTC m=+0.083219707 container start c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:31:14 compute-0 podman[242683]: 2025-12-13 07:31:14.569003677 +0000 UTC m=+0.084610072 container attach c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 07:31:14 compute-0 podman[242683]: 2025-12-13 07:31:14.501520933 +0000 UTC m=+0.017127328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:31:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/4261629339' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:31:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/4261629339' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]: {
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:     "0": [
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:         {
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "devices": [
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "/dev/loop3"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             ],
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_name": "ceph_lv0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_size": "21470642176",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "name": "ceph_lv0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "tags": {
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cluster_name": "ceph",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.crush_device_class": "",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.encrypted": "0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.objectstore": "bluestore",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osd_id": "0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.type": "block",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.vdo": "0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.with_tpm": "0"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             },
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "type": "block",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "vg_name": "ceph_vg0"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:         }
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:     ],
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:     "1": [
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:         {
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "devices": [
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "/dev/loop4"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             ],
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_name": "ceph_lv1",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_size": "21470642176",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "name": "ceph_lv1",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "tags": {
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cluster_name": "ceph",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.crush_device_class": "",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.encrypted": "0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.objectstore": "bluestore",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osd_id": "1",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.type": "block",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.vdo": "0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.with_tpm": "0"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             },
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "type": "block",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "vg_name": "ceph_vg1"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:         }
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:     ],
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:     "2": [
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:         {
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "devices": [
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "/dev/loop5"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             ],
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_name": "ceph_lv2",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_size": "21470642176",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "name": "ceph_lv2",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "tags": {
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.cluster_name": "ceph",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.crush_device_class": "",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.encrypted": "0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.objectstore": "bluestore",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osd_id": "2",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.type": "block",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.vdo": "0",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:                 "ceph.with_tpm": "0"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             },
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "type": "block",
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:             "vg_name": "ceph_vg2"
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:         }
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]:     ]
Dec 13 07:31:14 compute-0 tender_mcnulty[242696]: }
Dec 13 07:31:14 compute-0 systemd[1]: libpod-c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402.scope: Deactivated successfully.
Dec 13 07:31:14 compute-0 podman[242683]: 2025-12-13 07:31:14.808934341 +0000 UTC m=+0.324540736 container died c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:31:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9de35614a34bbc5461f56d2ed10d3bdf2582f6d2b21e2b62508dae1aaf12a04-merged.mount: Deactivated successfully.
Dec 13 07:31:14 compute-0 podman[242683]: 2025-12-13 07:31:14.838099764 +0000 UTC m=+0.353706159 container remove c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:31:14 compute-0 systemd[1]: libpod-conmon-c7d4d16e13cd8220931c8f06999a60bbf0d6bc90a70247a23523ed924dc5c402.scope: Deactivated successfully.
Dec 13 07:31:14 compute-0 sudo[242612]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:14 compute-0 sudo[242715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:31:14 compute-0 sudo[242715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:14 compute-0 sudo[242715]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:14 compute-0 sudo[242740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:31:14 compute-0 sudo[242740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:15 compute-0 podman[242775]: 2025-12-13 07:31:15.162701342 +0000 UTC m=+0.025743770 container create 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Dec 13 07:31:15 compute-0 systemd[1]: Started libpod-conmon-4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f.scope.
Dec 13 07:31:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:31:15 compute-0 podman[242775]: 2025-12-13 07:31:15.214913763 +0000 UTC m=+0.077956200 container init 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 07:31:15 compute-0 podman[242775]: 2025-12-13 07:31:15.220025846 +0000 UTC m=+0.083068272 container start 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:31:15 compute-0 podman[242775]: 2025-12-13 07:31:15.221194533 +0000 UTC m=+0.084236961 container attach 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:31:15 compute-0 magical_ptolemy[242788]: 167 167
Dec 13 07:31:15 compute-0 systemd[1]: libpod-4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f.scope: Deactivated successfully.
Dec 13 07:31:15 compute-0 conmon[242788]: conmon 4376159b9f85af721ef8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f.scope/container/memory.events
Dec 13 07:31:15 compute-0 podman[242775]: 2025-12-13 07:31:15.224362049 +0000 UTC m=+0.087404496 container died 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:31:15 compute-0 podman[242775]: 2025-12-13 07:31:15.240205331 +0000 UTC m=+0.103247758 container remove 4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 07:31:15 compute-0 podman[242775]: 2025-12-13 07:31:15.152754199 +0000 UTC m=+0.015796645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:31:15 compute-0 systemd[1]: libpod-conmon-4376159b9f85af721ef8ca163d004656386b89fa1ebe19a833588b5dfd72e86f.scope: Deactivated successfully.
Dec 13 07:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-98c6b0153a96164287a981d159aa3c0f83436e9760a7c29f14ca13c0bd117a09-merged.mount: Deactivated successfully.
Dec 13 07:31:15 compute-0 podman[242810]: 2025-12-13 07:31:15.359962521 +0000 UTC m=+0.026684719 container create a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:31:15 compute-0 systemd[1]: Started libpod-conmon-a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c.scope.
Dec 13 07:31:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:31:15 compute-0 podman[242810]: 2025-12-13 07:31:15.424848513 +0000 UTC m=+0.091570721 container init a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 07:31:15 compute-0 podman[242810]: 2025-12-13 07:31:15.429567717 +0000 UTC m=+0.096289915 container start a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:31:15 compute-0 podman[242810]: 2025-12-13 07:31:15.430798221 +0000 UTC m=+0.097520439 container attach a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:31:15 compute-0 podman[242810]: 2025-12-13 07:31:15.349756912 +0000 UTC m=+0.016479129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:31:15 compute-0 ceph-mon[74928]: pgmap v634: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:15 compute-0 lvm[242901]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:31:15 compute-0 lvm[242902]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:31:15 compute-0 lvm[242901]: VG ceph_vg0 finished
Dec 13 07:31:15 compute-0 lvm[242902]: VG ceph_vg1 finished
Dec 13 07:31:15 compute-0 lvm[242905]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:31:15 compute-0 lvm[242905]: VG ceph_vg2 finished
Dec 13 07:31:16 compute-0 hopeful_khorana[242824]: {}
Dec 13 07:31:16 compute-0 systemd[1]: libpod-a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c.scope: Deactivated successfully.
Dec 13 07:31:16 compute-0 podman[242810]: 2025-12-13 07:31:16.040784846 +0000 UTC m=+0.707507044 container died a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 07:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f653fe7a1d76048796942833fa5d9bee5f1e40c34907644f1091639ba40c18e-merged.mount: Deactivated successfully.
Dec 13 07:31:16 compute-0 podman[242810]: 2025-12-13 07:31:16.067309403 +0000 UTC m=+0.734031601 container remove a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_khorana, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:31:16 compute-0 systemd[1]: libpod-conmon-a9356c14b3db2c7d7c65ec8b74580e4237dcfacfe36287f5ab32ed2017fcab8c.scope: Deactivated successfully.
Dec 13 07:31:16 compute-0 sudo[242740]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:31:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:31:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:31:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:31:16 compute-0 sudo[242917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:31:16 compute-0 sudo[242917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:31:16 compute-0 sudo[242917]: pam_unix(sudo:session): session closed for user root
Dec 13 07:31:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:17 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:31:17 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:31:17 compute-0 ceph-mon[74928]: pgmap v635: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:18 compute-0 podman[242942]: 2025-12-13 07:31:18.696982275 +0000 UTC m=+0.040520374 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:31:19 compute-0 ceph-mon[74928]: pgmap v636: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.705 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.723 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.723 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.724 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.732 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.732 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.732 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.732 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.747 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.747 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.747 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.747 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 07:31:20 compute-0 nova_compute[241222]: 2025-12-13 07:31:20.748 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:31:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:31:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426600084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.154 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.351 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.352 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.352 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.352 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.422 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.422 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.433 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:31:21 compute-0 ceph-mon[74928]: pgmap v637: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:21 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3426600084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:31:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:31:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/257441879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.841 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.844 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.855 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.856 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 07:31:21 compute-0 nova_compute[241222]: 2025-12-13 07:31:21.857 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:31:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:22 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/257441879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:31:22 compute-0 nova_compute[241222]: 2025-12-13 07:31:22.692 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:22 compute-0 nova_compute[241222]: 2025-12-13 07:31:22.692 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:22 compute-0 nova_compute[241222]: 2025-12-13 07:31:22.692 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:22 compute-0 nova_compute[241222]: 2025-12-13 07:31:22.693 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:22 compute-0 nova_compute[241222]: 2025-12-13 07:31:22.693 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:22 compute-0 nova_compute[241222]: 2025-12-13 07:31:22.693 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:31:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:23 compute-0 ceph-mon[74928]: pgmap v638: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:25 compute-0 ceph-mon[74928]: pgmap v639: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:27 compute-0 ceph-mon[74928]: pgmap v640: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:28 compute-0 podman[243003]: 2025-12-13 07:31:28.712254437 +0000 UTC m=+0.055575505 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec 13 07:31:29 compute-0 ceph-mon[74928]: pgmap v641: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:31 compute-0 ceph-mon[74928]: pgmap v642: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:33 compute-0 ceph-mon[74928]: pgmap v643: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:35 compute-0 ceph-mon[74928]: pgmap v644: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:36 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:31:36.986 154121 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:fb:39', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ae:1b:16:aa:9c:6c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 07:31:36 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:31:36.987 154121 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 07:31:36 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:31:36.987 154121 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=075cc82e-193d-47f2-a248-9917472f5475, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 07:31:37 compute-0 ceph-mon[74928]: pgmap v645: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:31:38
Dec 13 07:31:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:31:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:31:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['vms', 'images', 'volumes', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control']
Dec 13 07:31:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:31:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:31:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:31:39 compute-0 ceph-mon[74928]: pgmap v646: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:40 compute-0 podman[243026]: 2025-12-13 07:31:40.602574007 +0000 UTC m=+0.039667746 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 13 07:31:41 compute-0 ceph-mon[74928]: pgmap v647: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:31:41.639 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:31:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:31:41.640 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:31:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:31:41.640 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:31:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:43 compute-0 ceph-mon[74928]: pgmap v648: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:45 compute-0 ceph-mon[74928]: pgmap v649: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:47 compute-0 ceph-mon[74928]: pgmap v650: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:31:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:49 compute-0 ceph-mon[74928]: pgmap v651: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:49 compute-0 podman[243043]: 2025-12-13 07:31:49.687418935 +0000 UTC m=+0.029602383 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 07:31:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:51 compute-0 ceph-mon[74928]: pgmap v652: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:53 compute-0 ceph-mon[74928]: pgmap v653: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:55 compute-0 ceph-mon[74928]: pgmap v654: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:57 compute-0 ceph-mon[74928]: pgmap v655: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:31:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:59 compute-0 ceph-mon[74928]: pgmap v656: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:31:59 compute-0 podman[243059]: 2025-12-13 07:31:59.714988495 +0000 UTC m=+0.058106791 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 07:32:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:01 compute-0 ceph-mon[74928]: pgmap v657: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:03 compute-0 ceph-mon[74928]: pgmap v658: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:05 compute-0 ceph-mon[74928]: pgmap v659: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:07 compute-0 ceph-mon[74928]: pgmap v660: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:32:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:32:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:32:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:32:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:32:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:32:09 compute-0 ceph-mon[74928]: pgmap v661: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:10 compute-0 podman[243082]: 2025-12-13 07:32:10.695951135 +0000 UTC m=+0.039193193 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 13 07:32:11 compute-0 ceph-mon[74928]: pgmap v662: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:13 compute-0 ceph-mon[74928]: pgmap v663: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:32:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554917468' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:32:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:32:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554917468' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:32:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/554917468' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:32:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/554917468' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:32:15 compute-0 ceph-mon[74928]: pgmap v664: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:16 compute-0 sudo[243100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:32:16 compute-0 sudo[243100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:16 compute-0 sudo[243100]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:16 compute-0 sudo[243125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:32:16 compute-0 sudo[243125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:16 compute-0 sudo[243125]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:32:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:32:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:32:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:32:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:32:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:32:16 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:32:16 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:32:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:32:16 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:32:16 compute-0 sudo[243180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:32:16 compute-0 sudo[243180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:16 compute-0 sudo[243180]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:16 compute-0 sudo[243205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:32:16 compute-0 sudo[243205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:16 compute-0 podman[243240]: 2025-12-13 07:32:16.949514327 +0000 UTC m=+0.027146536 container create 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 07:32:16 compute-0 systemd[1]: Started libpod-conmon-94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120.scope.
Dec 13 07:32:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:32:17 compute-0 podman[243240]: 2025-12-13 07:32:17.003023503 +0000 UTC m=+0.080655712 container init 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:32:17 compute-0 podman[243240]: 2025-12-13 07:32:17.008322164 +0000 UTC m=+0.085954373 container start 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 07:32:17 compute-0 podman[243240]: 2025-12-13 07:32:17.00937295 +0000 UTC m=+0.087005159 container attach 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 07:32:17 compute-0 gallant_jepsen[243253]: 167 167
Dec 13 07:32:17 compute-0 systemd[1]: libpod-94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120.scope: Deactivated successfully.
Dec 13 07:32:17 compute-0 podman[243240]: 2025-12-13 07:32:17.011933444 +0000 UTC m=+0.089565652 container died 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7139f97ed23a1e85fe664990a028278c8dab2d9e1796a751266eba50248dfce2-merged.mount: Deactivated successfully.
Dec 13 07:32:17 compute-0 podman[243240]: 2025-12-13 07:32:17.0285112 +0000 UTC m=+0.106143399 container remove 94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jepsen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 07:32:17 compute-0 podman[243240]: 2025-12-13 07:32:16.93775012 +0000 UTC m=+0.015382349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:32:17 compute-0 systemd[1]: libpod-conmon-94f9b2cd60605c89a09fad5f9496c713c8ccd4eebc11d199f0036e5fb16f7120.scope: Deactivated successfully.
Dec 13 07:32:17 compute-0 podman[243275]: 2025-12-13 07:32:17.147363389 +0000 UTC m=+0.028381648 container create be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:32:17 compute-0 systemd[1]: Started libpod-conmon-be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234.scope.
Dec 13 07:32:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:17 compute-0 podman[243275]: 2025-12-13 07:32:17.211505014 +0000 UTC m=+0.092523292 container init be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:32:17 compute-0 podman[243275]: 2025-12-13 07:32:17.218155927 +0000 UTC m=+0.099174185 container start be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:32:17 compute-0 podman[243275]: 2025-12-13 07:32:17.219132483 +0000 UTC m=+0.100150751 container attach be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:32:17 compute-0 podman[243275]: 2025-12-13 07:32:17.136498243 +0000 UTC m=+0.017516511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:32:17 compute-0 jolly_antonelli[243288]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:32:17 compute-0 jolly_antonelli[243288]: --> All data devices are unavailable
Dec 13 07:32:17 compute-0 systemd[1]: libpod-be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234.scope: Deactivated successfully.
Dec 13 07:32:17 compute-0 podman[243308]: 2025-12-13 07:32:17.596717365 +0000 UTC m=+0.016579179 container died be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 07:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-196206feda95f3b75c452a39a247dbd3bd197551dca3e5c975daab4a145f5cf4-merged.mount: Deactivated successfully.
Dec 13 07:32:17 compute-0 podman[243308]: 2025-12-13 07:32:17.614277747 +0000 UTC m=+0.034139561 container remove be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_antonelli, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:32:17 compute-0 systemd[1]: libpod-conmon-be3b964643cce098266f993155d5067de8547dbb2dab97824b3d5fbcc1269234.scope: Deactivated successfully.
Dec 13 07:32:17 compute-0 sudo[243205]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:17 compute-0 ceph-mon[74928]: pgmap v665: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:17 compute-0 sudo[243320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:32:17 compute-0 sudo[243320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:17 compute-0 sudo[243320]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:17 compute-0 sudo[243345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:32:17 compute-0 sudo[243345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:17 compute-0 podman[243380]: 2025-12-13 07:32:17.957271509 +0000 UTC m=+0.029171924 container create 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 07:32:17 compute-0 systemd[1]: Started libpod-conmon-7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5.scope.
Dec 13 07:32:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:32:18 compute-0 podman[243380]: 2025-12-13 07:32:18.003279032 +0000 UTC m=+0.075179447 container init 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:32:18 compute-0 podman[243380]: 2025-12-13 07:32:18.008224259 +0000 UTC m=+0.080124674 container start 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:32:18 compute-0 podman[243380]: 2025-12-13 07:32:18.00921978 +0000 UTC m=+0.081120195 container attach 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 07:32:18 compute-0 jovial_kare[243393]: 167 167
Dec 13 07:32:18 compute-0 systemd[1]: libpod-7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5.scope: Deactivated successfully.
Dec 13 07:32:18 compute-0 podman[243380]: 2025-12-13 07:32:18.01115614 +0000 UTC m=+0.083056555 container died 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f739e3e8752c27ef72b6064870a925084ba5946a0b4c8a3a00da55a1868346a0-merged.mount: Deactivated successfully.
Dec 13 07:32:18 compute-0 podman[243380]: 2025-12-13 07:32:18.027339895 +0000 UTC m=+0.099240310 container remove 7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:32:18 compute-0 podman[243380]: 2025-12-13 07:32:17.945161962 +0000 UTC m=+0.017062397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:32:18 compute-0 systemd[1]: libpod-conmon-7cc89ae3c5706b19cfdb6ce824d1b680d8efe07c02d1bd77e5a66ca0342821d5.scope: Deactivated successfully.
Dec 13 07:32:18 compute-0 podman[243415]: 2025-12-13 07:32:18.147195912 +0000 UTC m=+0.028716167 container create a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 07:32:18 compute-0 systemd[1]: Started libpod-conmon-a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb.scope.
Dec 13 07:32:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:18 compute-0 podman[243415]: 2025-12-13 07:32:18.199924802 +0000 UTC m=+0.081445077 container init a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 07:32:18 compute-0 podman[243415]: 2025-12-13 07:32:18.204994111 +0000 UTC m=+0.086514386 container start a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:32:18 compute-0 podman[243415]: 2025-12-13 07:32:18.206205228 +0000 UTC m=+0.087725493 container attach a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:32:18 compute-0 podman[243415]: 2025-12-13 07:32:18.135683389 +0000 UTC m=+0.017203664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:32:18 compute-0 priceless_buck[243428]: {
Dec 13 07:32:18 compute-0 priceless_buck[243428]:     "0": [
Dec 13 07:32:18 compute-0 priceless_buck[243428]:         {
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "devices": [
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "/dev/loop3"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             ],
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_name": "ceph_lv0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_size": "21470642176",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "name": "ceph_lv0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "tags": {
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cluster_name": "ceph",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.crush_device_class": "",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.encrypted": "0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.objectstore": "bluestore",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osd_id": "0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.type": "block",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.vdo": "0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.with_tpm": "0"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             },
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "type": "block",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "vg_name": "ceph_vg0"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:         }
Dec 13 07:32:18 compute-0 priceless_buck[243428]:     ],
Dec 13 07:32:18 compute-0 priceless_buck[243428]:     "1": [
Dec 13 07:32:18 compute-0 priceless_buck[243428]:         {
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "devices": [
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "/dev/loop4"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             ],
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_name": "ceph_lv1",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_size": "21470642176",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "name": "ceph_lv1",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "tags": {
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cluster_name": "ceph",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.crush_device_class": "",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.encrypted": "0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.objectstore": "bluestore",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osd_id": "1",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.type": "block",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.vdo": "0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.with_tpm": "0"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             },
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "type": "block",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "vg_name": "ceph_vg1"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:         }
Dec 13 07:32:18 compute-0 priceless_buck[243428]:     ],
Dec 13 07:32:18 compute-0 priceless_buck[243428]:     "2": [
Dec 13 07:32:18 compute-0 priceless_buck[243428]:         {
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "devices": [
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "/dev/loop5"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             ],
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_name": "ceph_lv2",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_size": "21470642176",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "name": "ceph_lv2",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "tags": {
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.cluster_name": "ceph",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.crush_device_class": "",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.encrypted": "0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.objectstore": "bluestore",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osd_id": "2",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.type": "block",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.vdo": "0",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:                 "ceph.with_tpm": "0"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             },
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "type": "block",
Dec 13 07:32:18 compute-0 priceless_buck[243428]:             "vg_name": "ceph_vg2"
Dec 13 07:32:18 compute-0 priceless_buck[243428]:         }
Dec 13 07:32:18 compute-0 priceless_buck[243428]:     ]
Dec 13 07:32:18 compute-0 priceless_buck[243428]: }
Dec 13 07:32:18 compute-0 systemd[1]: libpod-a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb.scope: Deactivated successfully.
Dec 13 07:32:18 compute-0 podman[243437]: 2025-12-13 07:32:18.475298607 +0000 UTC m=+0.017398049 container died a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b72566f2673731037b2f685013051ddf2363a9eee9b8f3b5d046228748ed0eb-merged.mount: Deactivated successfully.
Dec 13 07:32:18 compute-0 podman[243437]: 2025-12-13 07:32:18.496181656 +0000 UTC m=+0.038281089 container remove a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 07:32:18 compute-0 systemd[1]: libpod-conmon-a9b56310117cdf37207cc6838efaea3c9c98e4fc5893d925db4a33dcf7f15ceb.scope: Deactivated successfully.
Dec 13 07:32:18 compute-0 sudo[243345]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:18 compute-0 sudo[243449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:32:18 compute-0 sudo[243449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:18 compute-0 sudo[243449]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:18 compute-0 sudo[243474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:32:18 compute-0 sudo[243474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:18 compute-0 podman[243510]: 2025-12-13 07:32:18.837001571 +0000 UTC m=+0.030288663 container create c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 07:32:18 compute-0 systemd[1]: Started libpod-conmon-c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df.scope.
Dec 13 07:32:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:32:18 compute-0 podman[243510]: 2025-12-13 07:32:18.888846599 +0000 UTC m=+0.082133680 container init c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:32:18 compute-0 podman[243510]: 2025-12-13 07:32:18.893771497 +0000 UTC m=+0.087058579 container start c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:32:18 compute-0 podman[243510]: 2025-12-13 07:32:18.894905729 +0000 UTC m=+0.088192831 container attach c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:32:18 compute-0 adoring_faraday[243523]: 167 167
Dec 13 07:32:18 compute-0 systemd[1]: libpod-c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df.scope: Deactivated successfully.
Dec 13 07:32:18 compute-0 conmon[243523]: conmon c1661b5e4b9ad9f6728b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df.scope/container/memory.events
Dec 13 07:32:18 compute-0 podman[243510]: 2025-12-13 07:32:18.897926889 +0000 UTC m=+0.091213972 container died c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 07:32:18 compute-0 podman[243510]: 2025-12-13 07:32:18.913899457 +0000 UTC m=+0.107186539 container remove c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_faraday, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 07:32:18 compute-0 podman[243510]: 2025-12-13 07:32:18.824073747 +0000 UTC m=+0.017360849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:32:18 compute-0 systemd[1]: libpod-conmon-c1661b5e4b9ad9f6728bbea9b27f3c9c2f7cafe246fe2e94e8034dc8485b41df.scope: Deactivated successfully.
Dec 13 07:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7efabf4990f8014512806d641984150a0b2523b1b161520119852171d7051f8-merged.mount: Deactivated successfully.
Dec 13 07:32:19 compute-0 podman[243544]: 2025-12-13 07:32:19.035594481 +0000 UTC m=+0.027379605 container create a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:32:19 compute-0 systemd[1]: Started libpod-conmon-a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4.scope.
Dec 13 07:32:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:32:19 compute-0 podman[243544]: 2025-12-13 07:32:19.096970066 +0000 UTC m=+0.088755199 container init a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:32:19 compute-0 podman[243544]: 2025-12-13 07:32:19.101934688 +0000 UTC m=+0.093719803 container start a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:32:19 compute-0 podman[243544]: 2025-12-13 07:32:19.103004981 +0000 UTC m=+0.094790095 container attach a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:32:19 compute-0 podman[243544]: 2025-12-13 07:32:19.024962603 +0000 UTC m=+0.016747727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:32:19 compute-0 lvm[243635]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:32:19 compute-0 lvm[243634]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:32:19 compute-0 lvm[243635]: VG ceph_vg1 finished
Dec 13 07:32:19 compute-0 lvm[243634]: VG ceph_vg0 finished
Dec 13 07:32:19 compute-0 lvm[243638]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:32:19 compute-0 lvm[243638]: VG ceph_vg2 finished
Dec 13 07:32:19 compute-0 ceph-mon[74928]: pgmap v666: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:19 compute-0 boring_montalcini[243557]: {}
Dec 13 07:32:19 compute-0 systemd[1]: libpod-a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4.scope: Deactivated successfully.
Dec 13 07:32:19 compute-0 podman[243544]: 2025-12-13 07:32:19.741887626 +0000 UTC m=+0.733672741 container died a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 07:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7cbbe0e5282f9e05c9072323aba0aa9e9516fd35e17809307dc28b8b20c4fe-merged.mount: Deactivated successfully.
Dec 13 07:32:19 compute-0 podman[243544]: 2025-12-13 07:32:19.769617308 +0000 UTC m=+0.761402421 container remove a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 07:32:19 compute-0 podman[243640]: 2025-12-13 07:32:19.778970271 +0000 UTC m=+0.055823659 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 07:32:19 compute-0 systemd[1]: libpod-conmon-a6d372d62dde1ba22ae0ddebccd8605050245e88cc51c010a1bc44e9557f0ab4.scope: Deactivated successfully.
Dec 13 07:32:19 compute-0 sudo[243474]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:32:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:32:19 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:32:19 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:32:19 compute-0 sudo[243668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:32:19 compute-0 sudo[243668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:32:19 compute-0 sudo[243668]: pam_unix(sudo:session): session closed for user root
Dec 13 07:32:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:20 compute-0 nova_compute[241222]: 2025-12-13 07:32:20.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:20 compute-0 nova_compute[241222]: 2025-12-13 07:32:20.588 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:32:20 compute-0 nova_compute[241222]: 2025-12-13 07:32:20.589 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:32:20 compute-0 nova_compute[241222]: 2025-12-13 07:32:20.589 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:32:20 compute-0 nova_compute[241222]: 2025-12-13 07:32:20.589 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 07:32:20 compute-0 nova_compute[241222]: 2025-12-13 07:32:20.589 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:32:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:32:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:32:20 compute-0 ceph-mon[74928]: pgmap v667: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:32:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322650243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:32:20 compute-0 nova_compute[241222]: 2025-12-13 07:32:20.995 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.199 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.200 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5190MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.200 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.200 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.249 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.249 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.266 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:32:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:32:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730053541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.677 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.681 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.694 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.695 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 07:32:21 compute-0 nova_compute[241222]: 2025-12-13 07:32:21.695 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:32:21 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1322650243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:32:21 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/730053541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:32:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.694 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.695 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.695 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.695 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.709 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.709 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:22 compute-0 nova_compute[241222]: 2025-12-13 07:32:22.710 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 07:32:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:22 compute-0 ceph-mon[74928]: pgmap v668: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:23 compute-0 nova_compute[241222]: 2025-12-13 07:32:23.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:32:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:25 compute-0 ceph-mon[74928]: pgmap v669: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:27 compute-0 ceph-mon[74928]: pgmap v670: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:29 compute-0 ceph-mon[74928]: pgmap v671: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:30 compute-0 podman[243737]: 2025-12-13 07:32:30.717940906 +0000 UTC m=+0.060309982 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:32:31 compute-0 ceph-mon[74928]: pgmap v672: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:33 compute-0 ceph-mon[74928]: pgmap v673: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:35 compute-0 ceph-mon[74928]: pgmap v674: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:37 compute-0 ceph-mon[74928]: pgmap v675: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:32:38
Dec 13 07:32:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:32:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:32:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log']
Dec 13 07:32:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:32:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:32:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:32:39 compute-0 ceph-mon[74928]: pgmap v676: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:41 compute-0 ceph-mon[74928]: pgmap v677: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:32:41.641 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:32:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:32:41.641 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:32:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:32:41.641 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:32:41 compute-0 podman[243760]: 2025-12-13 07:32:41.697814816 +0000 UTC m=+0.038623451 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 07:32:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:43 compute-0 ceph-mon[74928]: pgmap v678: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:45 compute-0 ceph-mon[74928]: pgmap v679: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:47 compute-0 ceph-mon[74928]: pgmap v680: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:32:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:49 compute-0 ceph-mon[74928]: pgmap v681: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:50 compute-0 podman[243777]: 2025-12-13 07:32:50.693124882 +0000 UTC m=+0.036250739 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 07:32:51 compute-0 ceph-mon[74928]: pgmap v682: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.816894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172816913, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1489, "num_deletes": 506, "total_data_size": 1912352, "memory_usage": 1945872, "flush_reason": "Manual Compaction"}
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172821373, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1883183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13433, "largest_seqno": 14921, "table_properties": {"data_size": 1876640, "index_size": 3234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 15986, "raw_average_key_size": 18, "raw_value_size": 1861643, "raw_average_value_size": 2115, "num_data_blocks": 148, "num_entries": 880, "num_filter_entries": 880, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611048, "oldest_key_time": 1765611048, "file_creation_time": 1765611172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 4502 microseconds, and 3354 cpu microseconds.
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821395) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1883183 bytes OK
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821406) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821732) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821743) EVENT_LOG_v1 {"time_micros": 1765611172821740, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.821751) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1904735, prev total WAL file size 1904735, number of live WAL files 2.
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.822110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1839KB)], [32(7464KB)]
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172822129, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9527180, "oldest_snapshot_seqno": -1}
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3835 keys, 7496313 bytes, temperature: kUnknown
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172836516, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7496313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7468817, "index_size": 16826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93842, "raw_average_key_size": 24, "raw_value_size": 7397485, "raw_average_value_size": 1928, "num_data_blocks": 715, "num_entries": 3835, "num_filter_entries": 3835, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765611172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.836610) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7496313 bytes
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.836921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 661.2 rd, 520.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.3 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(9.0) write-amplify(4.0) OK, records in: 4860, records dropped: 1025 output_compression: NoCompression
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.836933) EVENT_LOG_v1 {"time_micros": 1765611172836928, "job": 14, "event": "compaction_finished", "compaction_time_micros": 14409, "compaction_time_cpu_micros": 11781, "output_level": 6, "num_output_files": 1, "total_output_size": 7496313, "num_input_records": 4860, "num_output_records": 3835, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172837171, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611172837945, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.822069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:32:52 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:32:52.837986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:32:53 compute-0 ceph-mon[74928]: pgmap v683: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:55 compute-0 ceph-mon[74928]: pgmap v684: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:57 compute-0 ceph-mon[74928]: pgmap v685: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:32:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:32:59 compute-0 ceph-mon[74928]: pgmap v686: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:01 compute-0 ceph-mon[74928]: pgmap v687: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:01 compute-0 podman[243794]: 2025-12-13 07:33:01.713160897 +0000 UTC m=+0.056480299 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 13 07:33:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:03 compute-0 ceph-mon[74928]: pgmap v688: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:05 compute-0 ceph-mon[74928]: pgmap v689: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:07 compute-0 ceph-mon[74928]: pgmap v690: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:33:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:33:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:33:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:33:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:33:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:33:09 compute-0 ceph-mon[74928]: pgmap v691: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:11 compute-0 ceph-mon[74928]: pgmap v692: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:12 compute-0 podman[243818]: 2025-12-13 07:33:12.69257572 +0000 UTC m=+0.035035044 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 13 07:33:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:13 compute-0 ceph-mon[74928]: pgmap v693: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:33:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1946603155' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:33:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:33:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1946603155' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:33:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1946603155' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:33:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1946603155' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:33:15 compute-0 ceph-mon[74928]: pgmap v694: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:17 compute-0 ceph-mon[74928]: pgmap v695: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:19 compute-0 ceph-mon[74928]: pgmap v696: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:19 compute-0 sudo[243836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:33:19 compute-0 sudo[243836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:19 compute-0 sudo[243836]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:19 compute-0 sudo[243861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:33:19 compute-0 sudo[243861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:20 compute-0 sudo[243861]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:33:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:33:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:33:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:33:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:33:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:33:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:33:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:33:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:33:20 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:33:20 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:33:20 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:33:20 compute-0 sudo[243915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:33:20 compute-0 sudo[243915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:20 compute-0 sudo[243915]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:20 compute-0 sudo[243940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:33:20 compute-0 sudo[243940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:20 compute-0 podman[243974]: 2025-12-13 07:33:20.690773189 +0000 UTC m=+0.027963901 container create 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 07:33:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:33:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:33:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:33:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:33:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:33:20 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:33:20 compute-0 systemd[1]: Started libpod-conmon-5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6.scope.
Dec 13 07:33:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:33:20 compute-0 podman[243974]: 2025-12-13 07:33:20.734881558 +0000 UTC m=+0.072072271 container init 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:33:20 compute-0 podman[243974]: 2025-12-13 07:33:20.742370606 +0000 UTC m=+0.079561320 container start 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:33:20 compute-0 podman[243974]: 2025-12-13 07:33:20.744550726 +0000 UTC m=+0.081741459 container attach 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 07:33:20 compute-0 gracious_jang[243987]: 167 167
Dec 13 07:33:20 compute-0 systemd[1]: libpod-5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6.scope: Deactivated successfully.
Dec 13 07:33:20 compute-0 podman[243974]: 2025-12-13 07:33:20.749289244 +0000 UTC m=+0.086479957 container died 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc7de5d9df05f978a40f3da76e8b814d79265b81435a0578c1d0ea027ab74380-merged.mount: Deactivated successfully.
Dec 13 07:33:20 compute-0 podman[243974]: 2025-12-13 07:33:20.773642916 +0000 UTC m=+0.110833629 container remove 5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_jang, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:33:20 compute-0 podman[243974]: 2025-12-13 07:33:20.678957015 +0000 UTC m=+0.016147747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:33:20 compute-0 systemd[1]: libpod-conmon-5faf10471519b74dd01a26bc0c43f551f337e4f8920b329ac89e158cbf1d84f6.scope: Deactivated successfully.
Dec 13 07:33:20 compute-0 podman[243988]: 2025-12-13 07:33:20.813917108 +0000 UTC m=+0.088288216 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 07:33:20 compute-0 podman[244025]: 2025-12-13 07:33:20.896153536 +0000 UTC m=+0.028538111 container create c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:33:20 compute-0 systemd[1]: Started libpod-conmon-c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5.scope.
Dec 13 07:33:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:20 compute-0 podman[244025]: 2025-12-13 07:33:20.960768025 +0000 UTC m=+0.093152610 container init c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:33:20 compute-0 podman[244025]: 2025-12-13 07:33:20.965870959 +0000 UTC m=+0.098255534 container start c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:33:20 compute-0 podman[244025]: 2025-12-13 07:33:20.967071085 +0000 UTC m=+0.099455680 container attach c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 07:33:20 compute-0 podman[244025]: 2025-12-13 07:33:20.885016229 +0000 UTC m=+0.017400814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:33:21 compute-0 elastic_rosalind[244038]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:33:21 compute-0 elastic_rosalind[244038]: --> All data devices are unavailable
Dec 13 07:33:21 compute-0 systemd[1]: libpod-c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5.scope: Deactivated successfully.
Dec 13 07:33:21 compute-0 podman[244025]: 2025-12-13 07:33:21.330888911 +0000 UTC m=+0.463273486 container died c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-31f56333a3712a87ea85fca2a74bf286f3718b90475985cc0f5f4646d986af74-merged.mount: Deactivated successfully.
Dec 13 07:33:21 compute-0 podman[244025]: 2025-12-13 07:33:21.35334168 +0000 UTC m=+0.485726255 container remove c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rosalind, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 07:33:21 compute-0 systemd[1]: libpod-conmon-c12e148393487ffe5e8681deaa2c4b27c86c426aba87a7354eea52e07e8ac1d5.scope: Deactivated successfully.
Dec 13 07:33:21 compute-0 sudo[243940]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:21 compute-0 sudo[244068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:33:21 compute-0 sudo[244068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:21 compute-0 sudo[244068]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:21 compute-0 sudo[244093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:33:21 compute-0 sudo[244093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:21 compute-0 nova_compute[241222]: 2025-12-13 07:33:21.564 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:21 compute-0 nova_compute[241222]: 2025-12-13 07:33:21.575 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:21 compute-0 podman[244129]: 2025-12-13 07:33:21.691049232 +0000 UTC m=+0.027212158 container create c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:33:21 compute-0 ceph-mon[74928]: pgmap v697: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:21 compute-0 systemd[1]: Started libpod-conmon-c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c.scope.
Dec 13 07:33:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:33:21 compute-0 podman[244129]: 2025-12-13 07:33:21.751945811 +0000 UTC m=+0.088108738 container init c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 07:33:21 compute-0 podman[244129]: 2025-12-13 07:33:21.756114809 +0000 UTC m=+0.092277736 container start c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 07:33:21 compute-0 podman[244129]: 2025-12-13 07:33:21.757357726 +0000 UTC m=+0.093520652 container attach c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:33:21 compute-0 jolly_diffie[244143]: 167 167
Dec 13 07:33:21 compute-0 systemd[1]: libpod-c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c.scope: Deactivated successfully.
Dec 13 07:33:21 compute-0 podman[244129]: 2025-12-13 07:33:21.759571768 +0000 UTC m=+0.095734695 container died c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dab2912f3d697bbad7cb1b2e414622095da1f4eb65640202cf27a7333f844ef-merged.mount: Deactivated successfully.
Dec 13 07:33:21 compute-0 podman[244129]: 2025-12-13 07:33:21.680253988 +0000 UTC m=+0.016416924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:33:21 compute-0 podman[244129]: 2025-12-13 07:33:21.778497266 +0000 UTC m=+0.114660192 container remove c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:33:21 compute-0 systemd[1]: libpod-conmon-c7201dd2d126ea51f19f369fd318c193db2223dfef30afb7ca5736eb1ac2cd8c.scope: Deactivated successfully.
Dec 13 07:33:21 compute-0 podman[244165]: 2025-12-13 07:33:21.897925581 +0000 UTC m=+0.027997112 container create 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec 13 07:33:21 compute-0 systemd[1]: Started libpod-conmon-43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e.scope.
Dec 13 07:33:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:21 compute-0 podman[244165]: 2025-12-13 07:33:21.951788469 +0000 UTC m=+0.081859990 container init 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 07:33:21 compute-0 podman[244165]: 2025-12-13 07:33:21.956802715 +0000 UTC m=+0.086874236 container start 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:33:21 compute-0 podman[244165]: 2025-12-13 07:33:21.961458098 +0000 UTC m=+0.091529619 container attach 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:33:21 compute-0 podman[244165]: 2025-12-13 07:33:21.88682807 +0000 UTC m=+0.016899601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]: {
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:     "0": [
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:         {
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "devices": [
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "/dev/loop3"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             ],
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_name": "ceph_lv0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_size": "21470642176",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "name": "ceph_lv0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "tags": {
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cluster_name": "ceph",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.crush_device_class": "",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.encrypted": "0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.objectstore": "bluestore",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osd_id": "0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.type": "block",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.vdo": "0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.with_tpm": "0"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             },
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "type": "block",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "vg_name": "ceph_vg0"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:         }
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:     ],
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:     "1": [
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:         {
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "devices": [
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "/dev/loop4"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             ],
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_name": "ceph_lv1",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_size": "21470642176",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "name": "ceph_lv1",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "tags": {
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cluster_name": "ceph",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.crush_device_class": "",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.encrypted": "0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.objectstore": "bluestore",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osd_id": "1",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.type": "block",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.vdo": "0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.with_tpm": "0"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             },
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "type": "block",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "vg_name": "ceph_vg1"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:         }
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:     ],
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:     "2": [
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:         {
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "devices": [
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "/dev/loop5"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             ],
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_name": "ceph_lv2",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_size": "21470642176",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "name": "ceph_lv2",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "tags": {
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.cluster_name": "ceph",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.crush_device_class": "",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.encrypted": "0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.objectstore": "bluestore",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osd_id": "2",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.type": "block",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.vdo": "0",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:                 "ceph.with_tpm": "0"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             },
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "type": "block",
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:             "vg_name": "ceph_vg2"
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:         }
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]:     ]
Dec 13 07:33:22 compute-0 vigilant_lehmann[244178]: }
Dec 13 07:33:22 compute-0 systemd[1]: libpod-43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e.scope: Deactivated successfully.
Dec 13 07:33:22 compute-0 podman[244165]: 2025-12-13 07:33:22.190367818 +0000 UTC m=+0.320439349 container died 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 07:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-63666f72f3c8cba9b6e15299e53108db6c6d18a78e9ce6811887ee48f83be16c-merged.mount: Deactivated successfully.
Dec 13 07:33:22 compute-0 podman[244165]: 2025-12-13 07:33:22.213145879 +0000 UTC m=+0.343217399 container remove 43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 07:33:22 compute-0 systemd[1]: libpod-conmon-43803d7fdf18df57f308d70e8eec533e99fc28abbb0a1058cd03264717b1f87e.scope: Deactivated successfully.
Dec 13 07:33:22 compute-0 sudo[244093]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:22 compute-0 sudo[244197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:33:22 compute-0 sudo[244197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:22 compute-0 sudo[244197]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:22 compute-0 sudo[244222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:33:22 compute-0 sudo[244222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:22 compute-0 podman[244257]: 2025-12-13 07:33:22.547424555 +0000 UTC m=+0.028244499 container create 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:33:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.567 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.568 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:22 compute-0 systemd[1]: Started libpod-conmon-7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d.scope.
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.585 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.585 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.585 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.585 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 07:33:22 compute-0 nova_compute[241222]: 2025-12-13 07:33:22.586 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:33:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:33:22 compute-0 podman[244257]: 2025-12-13 07:33:22.602380327 +0000 UTC m=+0.083200290 container init 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:33:22 compute-0 podman[244257]: 2025-12-13 07:33:22.60747809 +0000 UTC m=+0.088298034 container start 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:33:22 compute-0 wonderful_keldysh[244271]: 167 167
Dec 13 07:33:22 compute-0 podman[244257]: 2025-12-13 07:33:22.609705858 +0000 UTC m=+0.090525822 container attach 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 07:33:22 compute-0 systemd[1]: libpod-7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d.scope: Deactivated successfully.
Dec 13 07:33:22 compute-0 podman[244257]: 2025-12-13 07:33:22.610354116 +0000 UTC m=+0.091174071 container died 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Dec 13 07:33:22 compute-0 podman[244257]: 2025-12-13 07:33:22.535901612 +0000 UTC m=+0.016721576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:33:22 compute-0 podman[244257]: 2025-12-13 07:33:22.639280215 +0000 UTC m=+0.120100159 container remove 7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:33:22 compute-0 systemd[1]: libpod-conmon-7664264d0496504f9b23ae92bfe8b561f4d8f13b90b70c9dd6158cbe3d94239d.scope: Deactivated successfully.
Dec 13 07:33:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:33:22 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3360 writes, 15K keys, 3360 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3360 writes, 3360 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1302 writes, 5911 keys, 1302 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s
                                           Interval WAL: 1302 writes, 1302 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    385.8      0.04              0.03         7    0.006       0      0       0.0       0.0
                                             L6      1/0    7.15 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    549.9    450.6      0.09              0.08         6    0.016     24K   3209       0.0       0.0
                                            Sum      1/0    7.15 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    377.9    430.3      0.14              0.11        13    0.010     24K   3209       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    431.3    438.3      0.08              0.07         8    0.010     17K   2478       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    549.9    450.6      0.09              0.08         6    0.016     24K   3209       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    394.7      0.04              0.03         6    0.007       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.1 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5642ba289a30#2 capacity: 308.00 MB usage: 1.91 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(106,1.69 MB,0.547617%) FilterBlock(14,75.67 KB,0.023993%) IndexBlock(14,149.52 KB,0.0474063%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 07:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bda0724c9686ef73bb7e6d5f4f67342c4482b1a743aca131fa9a603aad22301-merged.mount: Deactivated successfully.
Dec 13 07:33:22 compute-0 podman[244312]: 2025-12-13 07:33:22.763983426 +0000 UTC m=+0.028941840 container create a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:33:22 compute-0 systemd[1]: Started libpod-conmon-a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675.scope.
Dec 13 07:33:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:33:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:22 compute-0 podman[244312]: 2025-12-13 07:33:22.819559965 +0000 UTC m=+0.084518399 container init a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 07:33:22 compute-0 podman[244312]: 2025-12-13 07:33:22.82403604 +0000 UTC m=+0.088994444 container start a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:33:22 compute-0 podman[244312]: 2025-12-13 07:33:22.825537032 +0000 UTC m=+0.090495457 container attach a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 07:33:22 compute-0 podman[244312]: 2025-12-13 07:33:22.751624774 +0000 UTC m=+0.016583198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:33:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:33:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396368659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.019 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.289 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.289 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.290 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.290 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.331 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.331 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.343 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:33:23 compute-0 lvm[244428]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:33:23 compute-0 lvm[244429]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:33:23 compute-0 lvm[244429]: VG ceph_vg2 finished
Dec 13 07:33:23 compute-0 lvm[244428]: VG ceph_vg1 finished
Dec 13 07:33:23 compute-0 lvm[244426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:33:23 compute-0 lvm[244426]: VG ceph_vg0 finished
Dec 13 07:33:23 compute-0 nostalgic_rosalind[244326]: {}
Dec 13 07:33:23 compute-0 systemd[1]: libpod-a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675.scope: Deactivated successfully.
Dec 13 07:33:23 compute-0 podman[244312]: 2025-12-13 07:33:23.516260203 +0000 UTC m=+0.781218617 container died a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 07:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bddb1f08094305f4436a2b4b76a69eae023e5b95d208eea55233b64cd697f4cb-merged.mount: Deactivated successfully.
Dec 13 07:33:23 compute-0 podman[244312]: 2025-12-13 07:33:23.53791433 +0000 UTC m=+0.802872734 container remove a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec 13 07:33:23 compute-0 systemd[1]: libpod-conmon-a4bb956b1cc4cab743bade0d9278bb783c0e7a56446bc6d1f0305aea37a49675.scope: Deactivated successfully.
Dec 13 07:33:23 compute-0 sudo[244222]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:33:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:33:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:33:23 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:33:23 compute-0 sudo[244441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:33:23 compute-0 sudo[244441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:33:23 compute-0 sudo[244441]: pam_unix(sudo:session): session closed for user root
Dec 13 07:33:23 compute-0 ceph-mon[74928]: pgmap v698: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:23 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2396368659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:33:23 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:33:23 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:33:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:33:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546582947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.787 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.791 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.801 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.802 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 07:33:23 compute-0 nova_compute[241222]: 2025-12-13 07:33:23.803 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:33:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:24 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/546582947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:33:24 compute-0 nova_compute[241222]: 2025-12-13 07:33:24.802 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:24 compute-0 nova_compute[241222]: 2025-12-13 07:33:24.803 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 07:33:24 compute-0 nova_compute[241222]: 2025-12-13 07:33:24.803 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 07:33:24 compute-0 nova_compute[241222]: 2025-12-13 07:33:24.813 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 07:33:24 compute-0 nova_compute[241222]: 2025-12-13 07:33:24.813 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:24 compute-0 nova_compute[241222]: 2025-12-13 07:33:24.813 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:24 compute-0 nova_compute[241222]: 2025-12-13 07:33:24.813 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:33:25 compute-0 ceph-mon[74928]: pgmap v699: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:27 compute-0 ceph-mon[74928]: pgmap v700: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:29 compute-0 ceph-mon[74928]: pgmap v701: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:31 compute-0 ceph-mon[74928]: pgmap v702: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:32 compute-0 podman[244468]: 2025-12-13 07:33:32.724527229 +0000 UTC m=+0.064825387 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 07:33:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:33 compute-0 ceph-mon[74928]: pgmap v703: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:35 compute-0 ceph-mon[74928]: pgmap v704: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:37 compute-0 ceph-mon[74928]: pgmap v705: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:33:38
Dec 13 07:33:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:33:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:33:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'vms', '.mgr']
Dec 13 07:33:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:33:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:33:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:33:39 compute-0 ceph-mon[74928]: pgmap v706: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:33:41.642 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:33:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:33:41.643 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:33:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:33:41.643 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:33:41 compute-0 ceph-mon[74928]: pgmap v707: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:43 compute-0 podman[244492]: 2025-12-13 07:33:43.706063429 +0000 UTC m=+0.043423313 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:33:43 compute-0 ceph-mon[74928]: pgmap v708: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:45 compute-0 ceph-mon[74928]: pgmap v709: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:47 compute-0 ceph-mon[74928]: pgmap v710: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:33:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:49 compute-0 ceph-mon[74928]: pgmap v711: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:51 compute-0 podman[244509]: 2025-12-13 07:33:51.695271177 +0000 UTC m=+0.037228307 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 07:33:51 compute-0 ceph-mon[74928]: pgmap v712: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:52 compute-0 ceph-mon[74928]: pgmap v713: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:55 compute-0 ceph-mon[74928]: pgmap v714: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:57 compute-0 ceph-mon[74928]: pgmap v715: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:33:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:33:59 compute-0 ceph-mon[74928]: pgmap v716: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:01 compute-0 ceph-mon[74928]: pgmap v717: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:03 compute-0 ceph-mon[74928]: pgmap v718: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:03 compute-0 podman[244525]: 2025-12-13 07:34:03.711606322 +0000 UTC m=+0.055514814 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 13 07:34:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:05 compute-0 ceph-mon[74928]: pgmap v719: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:07 compute-0 ceph-mon[74928]: pgmap v720: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:34:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:34:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:34:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:34:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:34:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:34:09 compute-0 ceph-mon[74928]: pgmap v721: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:11 compute-0 ceph-mon[74928]: pgmap v722: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:34:12 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5739 writes, 24K keys, 5739 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5739 writes, 979 syncs, 5.86 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:34:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:13 compute-0 ceph-mon[74928]: pgmap v723: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:34:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959695089' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:34:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:34:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959695089' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:34:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1959695089' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:34:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/1959695089' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:34:14 compute-0 podman[244549]: 2025-12-13 07:34:14.705163562 +0000 UTC m=+0.041869932 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 07:34:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:34:15 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7219 writes, 28K keys, 7219 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7219 writes, 1518 syncs, 4.76 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:34:15 compute-0 ceph-mon[74928]: pgmap v724: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:17 compute-0 ceph-mon[74928]: pgmap v725: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:34:18 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5817 writes, 24K keys, 5817 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5817 writes, 955 syncs, 6.09 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:34:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:19 compute-0 nova_compute[241222]: 2025-12-13 07:34:19.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:19 compute-0 nova_compute[241222]: 2025-12-13 07:34:19.569 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 07:34:19 compute-0 nova_compute[241222]: 2025-12-13 07:34:19.581 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 07:34:19 compute-0 nova_compute[241222]: 2025-12-13 07:34:19.581 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:19 compute-0 nova_compute[241222]: 2025-12-13 07:34:19.581 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 07:34:19 compute-0 nova_compute[241222]: 2025-12-13 07:34:19.588 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:19 compute-0 ceph-mon[74928]: pgmap v726: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:21 compute-0 ceph-mon[74928]: pgmap v727: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:22 compute-0 nova_compute[241222]: 2025-12-13 07:34:22.601 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:22 compute-0 nova_compute[241222]: 2025-12-13 07:34:22.618 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:34:22 compute-0 nova_compute[241222]: 2025-12-13 07:34:22.618 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:34:22 compute-0 nova_compute[241222]: 2025-12-13 07:34:22.619 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:34:22 compute-0 nova_compute[241222]: 2025-12-13 07:34:22.619 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 07:34:22 compute-0 nova_compute[241222]: 2025-12-13 07:34:22.619 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:34:22 compute-0 podman[244568]: 2025-12-13 07:34:22.690911516 +0000 UTC m=+0.035158867 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 07:34:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:34:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531130214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.026 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.221 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.222 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.222 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.223 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:34:23 compute-0 ceph-mgr[75200]: [devicehealth INFO root] Check health
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.402 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.402 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.456 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Refreshing inventories for resource provider 1d614cf3-e40f-4742-a628-7a61041be9be _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.511 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Updating ProviderTree inventory for provider 1d614cf3-e40f-4742-a628-7a61041be9be from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.511 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Updating inventory in ProviderTree for provider 1d614cf3-e40f-4742-a628-7a61041be9be with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.522 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Refreshing aggregate associations for resource provider 1d614cf3-e40f-4742-a628-7a61041be9be, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.539 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Refreshing trait associations for resource provider 1d614cf3-e40f-4742-a628-7a61041be9be, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX512VPCLMULQDQ,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX512VAES,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.548 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:34:23 compute-0 ceph-mon[74928]: pgmap v728: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:23 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3531130214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:34:23 compute-0 sudo[244626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:34:23 compute-0 sudo[244626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:23 compute-0 sudo[244626]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:23 compute-0 sudo[244651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:34:23 compute-0 sudo[244651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:34:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445481919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.956 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.960 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.971 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.972 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 07:34:23 compute-0 nova_compute[241222]: 2025-12-13 07:34:23.973 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:34:24 compute-0 sudo[244651]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:34:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:34:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:34:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:34:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:34:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:34:24 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:34:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:34:24 compute-0 sudo[244706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:34:24 compute-0 sudo[244706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:24 compute-0 sudo[244706]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:24 compute-0 sudo[244731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:34:24 compute-0 sudo[244731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:24 compute-0 podman[244766]: 2025-12-13 07:34:24.424850062 +0000 UTC m=+0.027673985 container create 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 07:34:24 compute-0 systemd[1]: Started libpod-conmon-2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf.scope.
Dec 13 07:34:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:34:24 compute-0 podman[244766]: 2025-12-13 07:34:24.47721322 +0000 UTC m=+0.080037163 container init 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:34:24 compute-0 podman[244766]: 2025-12-13 07:34:24.481971656 +0000 UTC m=+0.084795579 container start 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 07:34:24 compute-0 podman[244766]: 2025-12-13 07:34:24.484296617 +0000 UTC m=+0.087120540 container attach 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:34:24 compute-0 awesome_pasteur[244779]: 167 167
Dec 13 07:34:24 compute-0 systemd[1]: libpod-2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf.scope: Deactivated successfully.
Dec 13 07:34:24 compute-0 podman[244766]: 2025-12-13 07:34:24.48537304 +0000 UTC m=+0.088196963 container died 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fa5e36ba187816d186afdf5d596b78c1aa5ee69105b38b083eb0d79f5700567-merged.mount: Deactivated successfully.
Dec 13 07:34:24 compute-0 podman[244766]: 2025-12-13 07:34:24.504295713 +0000 UTC m=+0.107119636 container remove 2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_pasteur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:34:24 compute-0 podman[244766]: 2025-12-13 07:34:24.413658574 +0000 UTC m=+0.016482496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:34:24 compute-0 systemd[1]: libpod-conmon-2d153722389bf0223b48a191e3e283d7682d783b8023b977cb0375116634e0bf.scope: Deactivated successfully.
Dec 13 07:34:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:24 compute-0 podman[244802]: 2025-12-13 07:34:24.623415128 +0000 UTC m=+0.027842272 container create bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:34:24 compute-0 systemd[1]: Started libpod-conmon-bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0.scope.
Dec 13 07:34:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:24 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3445481919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:34:24 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:34:24 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:24 compute-0 podman[244802]: 2025-12-13 07:34:24.685704657 +0000 UTC m=+0.090131821 container init bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:34:24 compute-0 podman[244802]: 2025-12-13 07:34:24.691890976 +0000 UTC m=+0.096318121 container start bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:34:24 compute-0 podman[244802]: 2025-12-13 07:34:24.693213123 +0000 UTC m=+0.097640267 container attach bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:34:24 compute-0 podman[244802]: 2025-12-13 07:34:24.611694093 +0000 UTC m=+0.016121258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:34:24 compute-0 nova_compute[241222]: 2025-12-13 07:34:24.934 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:24 compute-0 nova_compute[241222]: 2025-12-13 07:34:24.936 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:24 compute-0 nova_compute[241222]: 2025-12-13 07:34:24.937 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:24 compute-0 nova_compute[241222]: 2025-12-13 07:34:24.937 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:24 compute-0 nova_compute[241222]: 2025-12-13 07:34:24.937 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:24 compute-0 nova_compute[241222]: 2025-12-13 07:34:24.937 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 07:34:25 compute-0 trusting_jang[244817]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:34:25 compute-0 trusting_jang[244817]: --> All data devices are unavailable
Dec 13 07:34:25 compute-0 systemd[1]: libpod-bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0.scope: Deactivated successfully.
Dec 13 07:34:25 compute-0 conmon[244817]: conmon bcc18f1e90f4769ce156 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0.scope/container/memory.events
Dec 13 07:34:25 compute-0 podman[244837]: 2025-12-13 07:34:25.082190616 +0000 UTC m=+0.017422635 container died bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 07:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dd04ace1e8e5a1b68c87e9408816db5775d416061c985c764cf6fbcebb87d69-merged.mount: Deactivated successfully.
Dec 13 07:34:25 compute-0 podman[244837]: 2025-12-13 07:34:25.100997031 +0000 UTC m=+0.036229049 container remove bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:34:25 compute-0 systemd[1]: libpod-conmon-bcc18f1e90f4769ce1562424eaeec1f120f42de5ae950ce32fd553ba70e5abd0.scope: Deactivated successfully.
Dec 13 07:34:25 compute-0 sudo[244731]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:25 compute-0 sudo[244849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:34:25 compute-0 sudo[244849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:25 compute-0 sudo[244849]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:25 compute-0 sudo[244874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:34:25 compute-0 sudo[244874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:25 compute-0 podman[244909]: 2025-12-13 07:34:25.441364953 +0000 UTC m=+0.028291056 container create 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 07:34:25 compute-0 systemd[1]: Started libpod-conmon-4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844.scope.
Dec 13 07:34:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:34:25 compute-0 podman[244909]: 2025-12-13 07:34:25.483374327 +0000 UTC m=+0.070300439 container init 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:34:25 compute-0 podman[244909]: 2025-12-13 07:34:25.487537284 +0000 UTC m=+0.074463396 container start 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:34:25 compute-0 podman[244909]: 2025-12-13 07:34:25.488923059 +0000 UTC m=+0.075849181 container attach 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 07:34:25 compute-0 wonderful_sanderson[244922]: 167 167
Dec 13 07:34:25 compute-0 systemd[1]: libpod-4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844.scope: Deactivated successfully.
Dec 13 07:34:25 compute-0 podman[244909]: 2025-12-13 07:34:25.491810547 +0000 UTC m=+0.078736650 container died 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 07:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc6285dfad4b1cf53382164df9e54428a095a35eb6646eb51f0af2aba03cd9f0-merged.mount: Deactivated successfully.
Dec 13 07:34:25 compute-0 podman[244909]: 2025-12-13 07:34:25.508476126 +0000 UTC m=+0.095402229 container remove 4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_sanderson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:34:25 compute-0 podman[244909]: 2025-12-13 07:34:25.429223308 +0000 UTC m=+0.016149420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:34:25 compute-0 systemd[1]: libpod-conmon-4f466d6ebc043651a1faabe893dbe3dc399d601d0815d5187779ac229a695844.scope: Deactivated successfully.
Dec 13 07:34:25 compute-0 nova_compute[241222]: 2025-12-13 07:34:25.569 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:25 compute-0 nova_compute[241222]: 2025-12-13 07:34:25.570 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 07:34:25 compute-0 nova_compute[241222]: 2025-12-13 07:34:25.570 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 07:34:25 compute-0 nova_compute[241222]: 2025-12-13 07:34:25.634 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 07:34:25 compute-0 nova_compute[241222]: 2025-12-13 07:34:25.634 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:25 compute-0 podman[244944]: 2025-12-13 07:34:25.647516541 +0000 UTC m=+0.031436098 container create 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:34:25 compute-0 systemd[1]: Started libpod-conmon-8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16.scope.
Dec 13 07:34:25 compute-0 ceph-mon[74928]: pgmap v729: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:25 compute-0 podman[244944]: 2025-12-13 07:34:25.702280021 +0000 UTC m=+0.086199589 container init 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:34:25 compute-0 podman[244944]: 2025-12-13 07:34:25.707052343 +0000 UTC m=+0.090971890 container start 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:34:25 compute-0 podman[244944]: 2025-12-13 07:34:25.708300961 +0000 UTC m=+0.092220508 container attach 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 13 07:34:25 compute-0 podman[244944]: 2025-12-13 07:34:25.634901015 +0000 UTC m=+0.018820582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]: {
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:     "0": [
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:         {
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "devices": [
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "/dev/loop3"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             ],
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_name": "ceph_lv0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_size": "21470642176",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "name": "ceph_lv0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "tags": {
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cluster_name": "ceph",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.crush_device_class": "",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.encrypted": "0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.objectstore": "bluestore",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osd_id": "0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.type": "block",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.vdo": "0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.with_tpm": "0"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             },
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "type": "block",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "vg_name": "ceph_vg0"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:         }
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:     ],
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:     "1": [
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:         {
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "devices": [
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "/dev/loop4"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             ],
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_name": "ceph_lv1",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_size": "21470642176",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "name": "ceph_lv1",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "tags": {
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cluster_name": "ceph",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.crush_device_class": "",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.encrypted": "0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.objectstore": "bluestore",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osd_id": "1",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.type": "block",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.vdo": "0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.with_tpm": "0"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             },
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "type": "block",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "vg_name": "ceph_vg1"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:         }
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:     ],
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:     "2": [
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:         {
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "devices": [
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "/dev/loop5"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             ],
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_name": "ceph_lv2",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_size": "21470642176",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "name": "ceph_lv2",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "tags": {
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.cluster_name": "ceph",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.crush_device_class": "",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.encrypted": "0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.objectstore": "bluestore",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osd_id": "2",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.type": "block",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.vdo": "0",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:                 "ceph.with_tpm": "0"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             },
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "type": "block",
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:             "vg_name": "ceph_vg2"
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:         }
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]:     ]
Dec 13 07:34:25 compute-0 clever_proskuriakova[244958]: }
Dec 13 07:34:25 compute-0 systemd[1]: libpod-8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16.scope: Deactivated successfully.
Dec 13 07:34:25 compute-0 podman[244944]: 2025-12-13 07:34:25.935204379 +0000 UTC m=+0.319123936 container died 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-635c276503117553931aa2438669f780896a1c5e233d7c9b9e051ac78f536a61-merged.mount: Deactivated successfully.
Dec 13 07:34:25 compute-0 podman[244944]: 2025-12-13 07:34:25.956979154 +0000 UTC m=+0.340898701 container remove 8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_proskuriakova, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 07:34:25 compute-0 systemd[1]: libpod-conmon-8f46244380dad1bb10d9d80e92dc62c9b6369a151fef787843d7ad1afc5e1a16.scope: Deactivated successfully.
Dec 13 07:34:25 compute-0 sudo[244874]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:26 compute-0 sudo[244976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:34:26 compute-0 sudo[244976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:26 compute-0 sudo[244976]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:26 compute-0 sudo[245001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:34:26 compute-0 sudo[245001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:26 compute-0 podman[245036]: 2025-12-13 07:34:26.291265554 +0000 UTC m=+0.027519185 container create 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:34:26 compute-0 systemd[1]: Started libpod-conmon-6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677.scope.
Dec 13 07:34:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:34:26 compute-0 podman[245036]: 2025-12-13 07:34:26.343390013 +0000 UTC m=+0.079643665 container init 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:34:26 compute-0 podman[245036]: 2025-12-13 07:34:26.348066665 +0000 UTC m=+0.084320297 container start 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:34:26 compute-0 podman[245036]: 2025-12-13 07:34:26.349340761 +0000 UTC m=+0.085594392 container attach 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 07:34:26 compute-0 dreamy_yonath[245049]: 167 167
Dec 13 07:34:26 compute-0 systemd[1]: libpod-6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677.scope: Deactivated successfully.
Dec 13 07:34:26 compute-0 podman[245036]: 2025-12-13 07:34:26.35157504 +0000 UTC m=+0.087828671 container died 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 07:34:26 compute-0 podman[245036]: 2025-12-13 07:34:26.36924608 +0000 UTC m=+0.105499712 container remove 6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_yonath, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:34:26 compute-0 podman[245036]: 2025-12-13 07:34:26.279893235 +0000 UTC m=+0.016146876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:34:26 compute-0 systemd[1]: libpod-conmon-6315847e8a2701a2918d3fcbbd01a4414fdf5f60928ce47ed435f322aaae6677.scope: Deactivated successfully.
Dec 13 07:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bba7c7961500252a6b8d8253b1442d9b511de710b12eb09faa2d31c55fceff7-merged.mount: Deactivated successfully.
Dec 13 07:34:26 compute-0 podman[245071]: 2025-12-13 07:34:26.49083105 +0000 UTC m=+0.028663615 container create 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 07:34:26 compute-0 systemd[1]: Started libpod-conmon-0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32.scope.
Dec 13 07:34:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:34:26 compute-0 podman[245071]: 2025-12-13 07:34:26.546561209 +0000 UTC m=+0.084393783 container init 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:34:26 compute-0 podman[245071]: 2025-12-13 07:34:26.553952503 +0000 UTC m=+0.091785057 container start 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:34:26 compute-0 podman[245071]: 2025-12-13 07:34:26.555397139 +0000 UTC m=+0.093229694 container attach 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 07:34:26 compute-0 nova_compute[241222]: 2025-12-13 07:34:26.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:34:26 compute-0 podman[245071]: 2025-12-13 07:34:26.479524334 +0000 UTC m=+0.017356910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:34:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:27 compute-0 lvm[245159]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:34:27 compute-0 lvm[245159]: VG ceph_vg0 finished
Dec 13 07:34:27 compute-0 lvm[245162]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:34:27 compute-0 lvm[245162]: VG ceph_vg1 finished
Dec 13 07:34:27 compute-0 lvm[245165]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:34:27 compute-0 lvm[245165]: VG ceph_vg2 finished
Dec 13 07:34:27 compute-0 naughty_mahavira[245084]: {}
Dec 13 07:34:27 compute-0 systemd[1]: libpod-0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32.scope: Deactivated successfully.
Dec 13 07:34:27 compute-0 podman[245071]: 2025-12-13 07:34:27.186629952 +0000 UTC m=+0.724462507 container died 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6802ff44602123d0ceb48bff1a0dfc1d4add76caf0f2fd318bbdce5604877e76-merged.mount: Deactivated successfully.
Dec 13 07:34:27 compute-0 podman[245071]: 2025-12-13 07:34:27.212840136 +0000 UTC m=+0.750672681 container remove 0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 07:34:27 compute-0 systemd[1]: libpod-conmon-0682d19eef7e8565b16413ffc4ae0170de971148d58d8afb637da43336a84d32.scope: Deactivated successfully.
Dec 13 07:34:27 compute-0 sudo[245001]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:34:27 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:34:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:34:27 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:34:27 compute-0 sudo[245178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:34:27 compute-0 sudo[245178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:34:27 compute-0 sudo[245178]: pam_unix(sudo:session): session closed for user root
Dec 13 07:34:27 compute-0 ceph-mon[74928]: pgmap v730: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:27 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:34:27 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:34:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:29 compute-0 ceph-mon[74928]: pgmap v731: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:31 compute-0 ceph-mon[74928]: pgmap v732: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:33 compute-0 ceph-mon[74928]: pgmap v733: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:34 compute-0 podman[245203]: 2025-12-13 07:34:34.723102758 +0000 UTC m=+0.063460020 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 07:34:35 compute-0 ceph-mon[74928]: pgmap v734: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:37 compute-0 ceph-mon[74928]: pgmap v735: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:34:38
Dec 13 07:34:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:34:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:34:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'vms', '.rgw.root']
Dec 13 07:34:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:34:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:34:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:34:39 compute-0 ceph-mon[74928]: pgmap v736: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:34:41.644 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:34:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:34:41.644 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:34:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:34:41.644 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:34:41 compute-0 ceph-mon[74928]: pgmap v737: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:43 compute-0 ceph-mon[74928]: pgmap v738: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:45 compute-0 podman[245226]: 2025-12-13 07:34:45.701357209 +0000 UTC m=+0.040306013 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 07:34:45 compute-0 ceph-mon[74928]: pgmap v739: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:47 compute-0 ceph-mon[74928]: pgmap v740: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:34:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:49 compute-0 ceph-mon[74928]: pgmap v741: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:51 compute-0 ceph-mon[74928]: pgmap v742: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:53 compute-0 podman[245243]: 2025-12-13 07:34:53.692936319 +0000 UTC m=+0.035983571 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:34:53 compute-0 ceph-mon[74928]: pgmap v743: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:55 compute-0 ceph-mon[74928]: pgmap v744: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:57 compute-0 ceph-mon[74928]: pgmap v745: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:34:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:34:59 compute-0 ceph-mon[74928]: pgmap v746: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:01 compute-0 ceph-mon[74928]: pgmap v747: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:03 compute-0 ceph-mon[74928]: pgmap v748: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:05 compute-0 podman[245260]: 2025-12-13 07:35:05.712067451 +0000 UTC m=+0.054069046 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 07:35:05 compute-0 ceph-mon[74928]: pgmap v749: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:07 compute-0 ceph-mon[74928]: pgmap v750: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 07:35:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:35:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:35:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:35:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:35:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:35:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:35:09 compute-0 ceph-mon[74928]: pgmap v751: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 07:35:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:11 compute-0 ceph-mon[74928]: pgmap v752: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:35:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4143608827' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:35:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:35:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4143608827' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:35:13 compute-0 ceph-mon[74928]: pgmap v753: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/4143608827' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:35:13 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/4143608827' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:35:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:15 compute-0 ceph-mon[74928]: pgmap v754: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:16 compute-0 podman[245283]: 2025-12-13 07:35:16.700131469 +0000 UTC m=+0.038223442 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 07:35:17 compute-0 ceph-mon[74928]: pgmap v755: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:19 compute-0 ceph-mon[74928]: pgmap v756: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 07:35:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 07:35:20 compute-0 ceph-mon[74928]: pgmap v757: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.776039) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320776096, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1401, "num_deletes": 251, "total_data_size": 2215775, "memory_usage": 2261264, "flush_reason": "Manual Compaction"}
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320783925, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2183762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14922, "largest_seqno": 16322, "table_properties": {"data_size": 2177253, "index_size": 3708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13496, "raw_average_key_size": 19, "raw_value_size": 2164135, "raw_average_value_size": 3150, "num_data_blocks": 170, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611173, "oldest_key_time": 1765611173, "file_creation_time": 1765611320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7905 microseconds, and 6486 cpu microseconds.
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.783954) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2183762 bytes OK
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.783968) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784312) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784324) EVENT_LOG_v1 {"time_micros": 1765611320784321, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2209581, prev total WAL file size 2209581, number of live WAL files 2.
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2132KB)], [35(7320KB)]
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320784884, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9680075, "oldest_snapshot_seqno": -1}
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4008 keys, 7869657 bytes, temperature: kUnknown
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320800875, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7869657, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7840711, "index_size": 17828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97911, "raw_average_key_size": 24, "raw_value_size": 7766007, "raw_average_value_size": 1937, "num_data_blocks": 756, "num_entries": 4008, "num_filter_entries": 4008, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765610001, "oldest_key_time": 0, "file_creation_time": 1765611320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3758b366-8ed2-410f-a091-1c92e1b75bd7", "db_session_id": "1EYF1QT48HSM3ZBGDMBQ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.801013) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7869657 bytes
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.801315) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 604.1 rd, 491.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 4522, records dropped: 514 output_compression: NoCompression
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.801331) EVENT_LOG_v1 {"time_micros": 1765611320801322, "job": 16, "event": "compaction_finished", "compaction_time_micros": 16024, "compaction_time_cpu_micros": 13279, "output_level": 6, "num_output_files": 1, "total_output_size": 7869657, "num_input_records": 4522, "num_output_records": 4008, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320801680, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611320802762, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.784778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:35:20 compute-0 ceph-mon[74928]: rocksdb: (Original Log Time 2025/12/13-07:35:20.802789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 07:35:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:23 compute-0 nova_compute[241222]: 2025-12-13 07:35:23.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:23 compute-0 nova_compute[241222]: 2025-12-13 07:35:23.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:23 compute-0 nova_compute[241222]: 2025-12-13 07:35:23.588 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:35:23 compute-0 nova_compute[241222]: 2025-12-13 07:35:23.589 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:35:23 compute-0 nova_compute[241222]: 2025-12-13 07:35:23.589 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:35:23 compute-0 nova_compute[241222]: 2025-12-13 07:35:23.589 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 07:35:23 compute-0 nova_compute[241222]: 2025-12-13 07:35:23.589 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:35:23 compute-0 ceph-mon[74928]: pgmap v758: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:35:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186093368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:35:23 compute-0 nova_compute[241222]: 2025-12-13 07:35:23.993 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.202 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.203 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": 
"0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.204 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.204 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.247 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.247 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.258 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:35:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:35:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3132218194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:35:24 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4186093368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.675 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.680 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.690 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.691 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 07:35:24 compute-0 nova_compute[241222]: 2025-12-13 07:35:24.691 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:35:24 compute-0 podman[245342]: 2025-12-13 07:35:24.699052416 +0000 UTC m=+0.040042994 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 07:35:25 compute-0 ceph-mon[74928]: pgmap v759: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:25 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3132218194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.688 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.688 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.688 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.688 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.698 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:25 compute-0 nova_compute[241222]: 2025-12-13 07:35:25.699 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 07:35:26 compute-0 nova_compute[241222]: 2025-12-13 07:35:26.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:26 compute-0 nova_compute[241222]: 2025-12-13 07:35:26.579 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:35:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:27 compute-0 sudo[245361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:35:27 compute-0 sudo[245361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:27 compute-0 sudo[245361]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:27 compute-0 sudo[245386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 07:35:27 compute-0 sudo[245386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:27 compute-0 sudo[245386]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:35:27 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:35:27 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:27 compute-0 ceph-mon[74928]: pgmap v760: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:27 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:27 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:27 compute-0 sudo[245429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:35:27 compute-0 sudo[245429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:27 compute-0 sudo[245429]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:27 compute-0 sudo[245454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:35:27 compute-0 sudo[245454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:28 compute-0 sudo[245454]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 07:35:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:35:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:35:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:35:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:35:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:35:28 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:35:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:35:28 compute-0 sudo[245508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:35:28 compute-0 sudo[245508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:28 compute-0 sudo[245508]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:28 compute-0 sudo[245533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:35:28 compute-0 sudo[245533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:28 compute-0 podman[245568]: 2025-12-13 07:35:28.431983092 +0000 UTC m=+0.029909169 container create 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:35:28 compute-0 systemd[1]: Started libpod-conmon-1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76.scope.
Dec 13 07:35:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:35:28 compute-0 podman[245568]: 2025-12-13 07:35:28.480922622 +0000 UTC m=+0.078848699 container init 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:35:28 compute-0 podman[245568]: 2025-12-13 07:35:28.486127659 +0000 UTC m=+0.084053736 container start 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:35:28 compute-0 podman[245568]: 2025-12-13 07:35:28.487220144 +0000 UTC m=+0.085146221 container attach 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:35:28 compute-0 flamboyant_hugle[245581]: 167 167
Dec 13 07:35:28 compute-0 systemd[1]: libpod-1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76.scope: Deactivated successfully.
Dec 13 07:35:28 compute-0 podman[245568]: 2025-12-13 07:35:28.490199195 +0000 UTC m=+0.088125272 container died 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 07:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c913734e1da22587a95389b15c181803390908c5e9e07a5ce2644c3fed9733-merged.mount: Deactivated successfully.
Dec 13 07:35:28 compute-0 podman[245568]: 2025-12-13 07:35:28.509362628 +0000 UTC m=+0.107288705 container remove 1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:35:28 compute-0 podman[245568]: 2025-12-13 07:35:28.419731194 +0000 UTC m=+0.017657282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:35:28 compute-0 systemd[1]: libpod-conmon-1032a54cdf35e581f5e2277a99372a8ea2320c9444733f167d7717ccc9f97b76.scope: Deactivated successfully.
Dec 13 07:35:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:28 compute-0 podman[245602]: 2025-12-13 07:35:28.630466548 +0000 UTC m=+0.030003276 container create 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:35:28 compute-0 systemd[1]: Started libpod-conmon-190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879.scope.
Dec 13 07:35:28 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:28 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:35:28 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:35:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:28 compute-0 podman[245602]: 2025-12-13 07:35:28.686900771 +0000 UTC m=+0.086437518 container init 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 07:35:28 compute-0 podman[245602]: 2025-12-13 07:35:28.69331436 +0000 UTC m=+0.092851078 container start 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 07:35:28 compute-0 podman[245602]: 2025-12-13 07:35:28.694546437 +0000 UTC m=+0.094083164 container attach 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 07:35:28 compute-0 podman[245602]: 2025-12-13 07:35:28.618901773 +0000 UTC m=+0.018438520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:35:29 compute-0 keen_ellis[245615]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:35:29 compute-0 keen_ellis[245615]: --> All data devices are unavailable
Dec 13 07:35:29 compute-0 systemd[1]: libpod-190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879.scope: Deactivated successfully.
Dec 13 07:35:29 compute-0 podman[245602]: 2025-12-13 07:35:29.069893018 +0000 UTC m=+0.469429745 container died 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdb6fcc18b0769111c5fe90419aabad2a035e2544e1badaf0daa4176f0f80a56-merged.mount: Deactivated successfully.
Dec 13 07:35:29 compute-0 podman[245602]: 2025-12-13 07:35:29.090604562 +0000 UTC m=+0.490141289 container remove 190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 07:35:29 compute-0 systemd[1]: libpod-conmon-190e6acdb6888d084381e05c2f6c23ef1681d3d51b2dba40cd8c09c4ca872879.scope: Deactivated successfully.
Dec 13 07:35:29 compute-0 sudo[245533]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:29 compute-0 sudo[245644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:35:29 compute-0 sudo[245644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:29 compute-0 sudo[245644]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:29 compute-0 sudo[245669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:35:29 compute-0 sudo[245669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:29 compute-0 podman[245704]: 2025-12-13 07:35:29.430995907 +0000 UTC m=+0.029388279 container create 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 07:35:29 compute-0 systemd[1]: Started libpod-conmon-79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a.scope.
Dec 13 07:35:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:35:29 compute-0 podman[245704]: 2025-12-13 07:35:29.488776351 +0000 UTC m=+0.087168723 container init 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:35:29 compute-0 podman[245704]: 2025-12-13 07:35:29.493697604 +0000 UTC m=+0.092089976 container start 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:35:29 compute-0 podman[245704]: 2025-12-13 07:35:29.494751485 +0000 UTC m=+0.093143858 container attach 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 07:35:29 compute-0 friendly_kapitsa[245717]: 167 167
Dec 13 07:35:29 compute-0 systemd[1]: libpod-79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a.scope: Deactivated successfully.
Dec 13 07:35:29 compute-0 podman[245704]: 2025-12-13 07:35:29.496927357 +0000 UTC m=+0.095319750 container died 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 07:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-67a9a22f64cc1d7546d012a37ce5c3f81e779a9d45ea82baf129a73b599ea542-merged.mount: Deactivated successfully.
Dec 13 07:35:29 compute-0 podman[245704]: 2025-12-13 07:35:29.513773352 +0000 UTC m=+0.112165724 container remove 79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kapitsa, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 07:35:29 compute-0 podman[245704]: 2025-12-13 07:35:29.42002651 +0000 UTC m=+0.018418904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:35:29 compute-0 systemd[1]: libpod-conmon-79764660549df75de455581f909107f757ea306672c0645860bc4e60c5452c7a.scope: Deactivated successfully.
Dec 13 07:35:29 compute-0 podman[245739]: 2025-12-13 07:35:29.631223943 +0000 UTC m=+0.028120576 container create 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:35:29 compute-0 systemd[1]: Started libpod-conmon-6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69.scope.
Dec 13 07:35:29 compute-0 ceph-mon[74928]: pgmap v761: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:29 compute-0 podman[245739]: 2025-12-13 07:35:29.689770577 +0000 UTC m=+0.086667211 container init 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 07:35:29 compute-0 podman[245739]: 2025-12-13 07:35:29.694530087 +0000 UTC m=+0.091426710 container start 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:35:29 compute-0 podman[245739]: 2025-12-13 07:35:29.696096503 +0000 UTC m=+0.092993136 container attach 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 07:35:29 compute-0 podman[245739]: 2025-12-13 07:35:29.620254627 +0000 UTC m=+0.017151280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:35:29 compute-0 bold_chaum[245752]: {
Dec 13 07:35:29 compute-0 bold_chaum[245752]:     "0": [
Dec 13 07:35:29 compute-0 bold_chaum[245752]:         {
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "devices": [
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "/dev/loop3"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             ],
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_name": "ceph_lv0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_size": "21470642176",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "name": "ceph_lv0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "tags": {
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cluster_name": "ceph",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.crush_device_class": "",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.encrypted": "0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.objectstore": "bluestore",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osd_id": "0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.type": "block",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.vdo": "0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.with_tpm": "0"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             },
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "type": "block",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "vg_name": "ceph_vg0"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:         }
Dec 13 07:35:29 compute-0 bold_chaum[245752]:     ],
Dec 13 07:35:29 compute-0 bold_chaum[245752]:     "1": [
Dec 13 07:35:29 compute-0 bold_chaum[245752]:         {
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "devices": [
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "/dev/loop4"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             ],
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_name": "ceph_lv1",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_size": "21470642176",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "name": "ceph_lv1",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "tags": {
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cluster_name": "ceph",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.crush_device_class": "",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.encrypted": "0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.objectstore": "bluestore",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osd_id": "1",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.type": "block",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.vdo": "0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.with_tpm": "0"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             },
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "type": "block",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "vg_name": "ceph_vg1"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:         }
Dec 13 07:35:29 compute-0 bold_chaum[245752]:     ],
Dec 13 07:35:29 compute-0 bold_chaum[245752]:     "2": [
Dec 13 07:35:29 compute-0 bold_chaum[245752]:         {
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "devices": [
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "/dev/loop5"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             ],
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_name": "ceph_lv2",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_size": "21470642176",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "name": "ceph_lv2",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "tags": {
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.cluster_name": "ceph",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.crush_device_class": "",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.encrypted": "0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.objectstore": "bluestore",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osd_id": "2",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.type": "block",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.vdo": "0",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:                 "ceph.with_tpm": "0"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             },
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "type": "block",
Dec 13 07:35:29 compute-0 bold_chaum[245752]:             "vg_name": "ceph_vg2"
Dec 13 07:35:29 compute-0 bold_chaum[245752]:         }
Dec 13 07:35:29 compute-0 bold_chaum[245752]:     ]
Dec 13 07:35:29 compute-0 bold_chaum[245752]: }
Dec 13 07:35:29 compute-0 systemd[1]: libpod-6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69.scope: Deactivated successfully.
Dec 13 07:35:29 compute-0 conmon[245752]: conmon 6bc2297ae43a6f4d0400 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69.scope/container/memory.events
Dec 13 07:35:29 compute-0 podman[245739]: 2025-12-13 07:35:29.927248406 +0000 UTC m=+0.324145049 container died 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 07:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-92d6d007434ea82bbeb271be5c0c7baa835946b9902e34611b4817be40370769-merged.mount: Deactivated successfully.
Dec 13 07:35:29 compute-0 podman[245739]: 2025-12-13 07:35:29.949354452 +0000 UTC m=+0.346251084 container remove 6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:35:29 compute-0 systemd[1]: libpod-conmon-6bc2297ae43a6f4d04007652fc49727f2d2842b70ee513269592634711611d69.scope: Deactivated successfully.
Dec 13 07:35:29 compute-0 sudo[245669]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:30 compute-0 sudo[245772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:35:30 compute-0 sudo[245772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:30 compute-0 sudo[245772]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:30 compute-0 sudo[245797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:35:30 compute-0 sudo[245797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:30 compute-0 podman[245832]: 2025-12-13 07:35:30.292995006 +0000 UTC m=+0.028769326 container create 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 07:35:30 compute-0 systemd[1]: Started libpod-conmon-7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22.scope.
Dec 13 07:35:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:35:30 compute-0 podman[245832]: 2025-12-13 07:35:30.341816333 +0000 UTC m=+0.077590653 container init 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:35:30 compute-0 podman[245832]: 2025-12-13 07:35:30.346263265 +0000 UTC m=+0.082037585 container start 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:35:30 compute-0 podman[245832]: 2025-12-13 07:35:30.347424799 +0000 UTC m=+0.083199119 container attach 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:35:30 compute-0 wonderful_dirac[245845]: 167 167
Dec 13 07:35:30 compute-0 systemd[1]: libpod-7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22.scope: Deactivated successfully.
Dec 13 07:35:30 compute-0 podman[245832]: 2025-12-13 07:35:30.349760711 +0000 UTC m=+0.085535031 container died 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 07:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-236175dcb5bb0fccb3b4bf32a381a695496c681b2c9b287be70f55fd31d71ac8-merged.mount: Deactivated successfully.
Dec 13 07:35:30 compute-0 podman[245832]: 2025-12-13 07:35:30.368222816 +0000 UTC m=+0.103997136 container remove 7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:35:30 compute-0 podman[245832]: 2025-12-13 07:35:30.281360088 +0000 UTC m=+0.017134409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:35:30 compute-0 systemd[1]: libpod-conmon-7c59e14666ed9659ec6b55795844c000235f0748b10bcda548a89a16fac69f22.scope: Deactivated successfully.
Dec 13 07:35:30 compute-0 podman[245866]: 2025-12-13 07:35:30.488813791 +0000 UTC m=+0.027051044 container create 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 07:35:30 compute-0 systemd[1]: Started libpod-conmon-3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358.scope.
Dec 13 07:35:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:35:30 compute-0 podman[245866]: 2025-12-13 07:35:30.547185006 +0000 UTC m=+0.085422269 container init 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:35:30 compute-0 podman[245866]: 2025-12-13 07:35:30.553129805 +0000 UTC m=+0.091367057 container start 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 07:35:30 compute-0 podman[245866]: 2025-12-13 07:35:30.55438213 +0000 UTC m=+0.092619383 container attach 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 07:35:30 compute-0 podman[245866]: 2025-12-13 07:35:30.477850938 +0000 UTC m=+0.016088211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:35:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:31 compute-0 lvm[245956]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:35:31 compute-0 lvm[245957]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:35:31 compute-0 lvm[245956]: VG ceph_vg0 finished
Dec 13 07:35:31 compute-0 lvm[245957]: VG ceph_vg1 finished
Dec 13 07:35:31 compute-0 lvm[245960]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:35:31 compute-0 lvm[245960]: VG ceph_vg2 finished
Dec 13 07:35:31 compute-0 intelligent_poincare[245879]: {}
Dec 13 07:35:31 compute-0 systemd[1]: libpod-3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358.scope: Deactivated successfully.
Dec 13 07:35:31 compute-0 podman[245866]: 2025-12-13 07:35:31.19083728 +0000 UTC m=+0.729074533 container died 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a1f255afc718aa5b79821a75b5bb2777997d3c5e690f2a206f4f32846dac2f8-merged.mount: Deactivated successfully.
Dec 13 07:35:31 compute-0 podman[245866]: 2025-12-13 07:35:31.211074973 +0000 UTC m=+0.749312225 container remove 3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:35:31 compute-0 systemd[1]: libpod-conmon-3667e80530682efbf25f70ec132af7e9b0936eaa539f9791f24c7c50da590358.scope: Deactivated successfully.
Dec 13 07:35:31 compute-0 sudo[245797]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:35:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:35:31 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:31 compute-0 sudo[245973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:35:31 compute-0 sudo[245973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:35:31 compute-0 sudo[245973]: pam_unix(sudo:session): session closed for user root
Dec 13 07:35:31 compute-0 ceph-mon[74928]: pgmap v762: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:31 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:31 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:35:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:33 compute-0 ceph-mon[74928]: pgmap v763: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:35 compute-0 ceph-mon[74928]: pgmap v764: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:36 compute-0 podman[245998]: 2025-12-13 07:35:36.711151215 +0000 UTC m=+0.051773010 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 07:35:37 compute-0 ceph-mon[74928]: pgmap v765: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:35:38
Dec 13 07:35:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:35:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:35:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes']
Dec 13 07:35:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:35:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:35:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:35:39 compute-0 ceph-mon[74928]: pgmap v766: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:35:41.645 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:35:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:35:41.645 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:35:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:35:41.646 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:35:41 compute-0 ceph-mon[74928]: pgmap v767: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:42 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:42 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:43 compute-0 ceph-mon[74928]: pgmap v768: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:44 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:45 compute-0 ceph-mon[74928]: pgmap v769: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:46 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:47 compute-0 ceph-mon[74928]: pgmap v770: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:47 compute-0 podman[246021]: 2025-12-13 07:35:47.702955004 +0000 UTC m=+0.039440561 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible)
Dec 13 07:35:47 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.572044903640365e-07 of space, bias 4.0, pg target 0.0009086453884368437 quantized to 16 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 07:35:48 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:49 compute-0 ceph-mon[74928]: pgmap v771: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:50 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:51 compute-0 ceph-mon[74928]: pgmap v772: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:52 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:52 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:53 compute-0 ceph-mon[74928]: pgmap v773: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:54 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:55 compute-0 podman[246038]: 2025-12-13 07:35:55.692088438 +0000 UTC m=+0.035689447 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 07:35:55 compute-0 ceph-mon[74928]: pgmap v774: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:56 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:57 compute-0 ceph-mon[74928]: pgmap v775: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:57 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:35:58 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:35:59 compute-0 ceph-mon[74928]: pgmap v776: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:00 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:01 compute-0 ceph-mon[74928]: pgmap v777: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:02 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:02 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:36:03 compute-0 ceph-mon[74928]: pgmap v778: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:04 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:05 compute-0 ceph-mon[74928]: pgmap v779: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:06 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:07 compute-0 podman[246054]: 2025-12-13 07:36:07.709423853 +0000 UTC m=+0.052756949 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller)
Dec 13 07:36:07 compute-0 ceph-mon[74928]: pgmap v780: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:07 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:36:08 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:08 compute-0 sshd-session[246077]: Accepted publickey for zuul from 192.168.122.10 port 33054 ssh2: ECDSA SHA256:7hwsPrzEGvjNfXCD1S+7z6QhqAHn2HFxvvV5rKQhgY8
Dec 13 07:36:09 compute-0 systemd-logind[745]: New session 54 of user zuul.
Dec 13 07:36:09 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec 13 07:36:09 compute-0 sshd-session[246077]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 07:36:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:36:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:36:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:36:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:36:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:36:09 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:36:09 compute-0 sudo[246081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 13 07:36:09 compute-0 sudo[246081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 07:36:09 compute-0 ceph-mon[74928]: pgmap v781: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:10 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:11 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14384 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:11 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14386 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:11 compute-0 ceph-mon[74928]: pgmap v782: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 07:36:12 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3822048027' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 07:36:12 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:12 compute-0 ceph-mon[74928]: from='client.14384 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:12 compute-0 ceph-mon[74928]: from='client.14386 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:12 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3822048027' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 07:36:12 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:36:13 compute-0 ceph-mon[74928]: pgmap v783: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 07:36:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/231078060' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:36:13 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 07:36:13 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/231078060' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:36:14 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/231078060' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 07:36:14 compute-0 ceph-mon[74928]: from='client.? 192.168.122.10:0/231078060' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 07:36:15 compute-0 ceph-mon[74928]: pgmap v784: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:16 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:17 compute-0 ceph-mon[74928]: pgmap v785: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:17 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:36:18 compute-0 podman[246365]: 2025-12-13 07:36:18.076025249 +0000 UTC m=+0.047653024 container health_status f696b337a701eeb12548640e55e827503e894b0e602e4d8080c3212ba22210e6 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd)
Dec 13 07:36:18 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:18 compute-0 ovs-vsctl[246408]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 13 07:36:19 compute-0 virtqemud[241006]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 13 07:36:19 compute-0 virtqemud[241006]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 13 07:36:19 compute-0 virtqemud[241006]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 13 07:36:19 compute-0 ceph-mon[74928]: pgmap v786: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:19 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: cache status {prefix=cache status} (starting...)
Dec 13 07:36:19 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: client ls {prefix=client ls} (starting...)
Dec 13 07:36:20 compute-0 lvm[246722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:36:20 compute-0 lvm[246722]: VG ceph_vg0 finished
Dec 13 07:36:20 compute-0 lvm[246726]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:36:20 compute-0 lvm[246726]: VG ceph_vg2 finished
Dec 13 07:36:20 compute-0 lvm[246756]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:36:20 compute-0 lvm[246756]: VG ceph_vg1 finished
Dec 13 07:36:20 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14394 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:20 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: damage ls {prefix=damage ls} (starting...)
Dec 13 07:36:20 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:20 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump loads {prefix=dump loads} (starting...)
Dec 13 07:36:20 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 13 07:36:20 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14396 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:20 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 13 07:36:20 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 13 07:36:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 13 07:36:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623486156' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 07:36:21 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 13 07:36:21 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14400 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:21 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 13 07:36:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:36:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980226997' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:36:21 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 13 07:36:21 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14404 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:21 compute-0 ceph-mgr[75200]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 07:36:21 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: 2025-12-13T07:36:21.550+0000 7facc0ef1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 07:36:21 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: ops {prefix=ops} (starting...)
Dec 13 07:36:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 13 07:36:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875113700' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 07:36:21 compute-0 ceph-mon[74928]: from='client.14394 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:21 compute-0 ceph-mon[74928]: pgmap v787: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:21 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1623486156' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 07:36:21 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2980226997' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:36:21 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1875113700' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 07:36:21 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 13 07:36:21 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/724077111' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 07:36:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 13 07:36:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5555665' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 07:36:22 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: session ls {prefix=session ls} (starting...)
Dec 13 07:36:22 compute-0 ceph-mds[93864]: mds.cephfs.compute-0.zwnyoz asok_command: status {prefix=status} (starting...)
Dec 13 07:36:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 07:36:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994718485' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 07:36:22 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14414 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:22 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:22 compute-0 ceph-mon[74928]: from='client.14396 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:22 compute-0 ceph-mon[74928]: from='client.14400 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:22 compute-0 ceph-mon[74928]: from='client.14404 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:22 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/724077111' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 07:36:22 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/5555665' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 07:36:22 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1994718485' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 07:36:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 07:36:22 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3909206045' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 07:36:22 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:36:22 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 07:36:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113455235' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 13 07:36:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644590945' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 07:36:23 compute-0 nova_compute[241222]: 2025-12-13 07:36:23.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:23 compute-0 nova_compute[241222]: 2025-12-13 07:36:23.586 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:36:23 compute-0 nova_compute[241222]: 2025-12-13 07:36:23.586 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:36:23 compute-0 nova_compute[241222]: 2025-12-13 07:36:23.587 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:36:23 compute-0 nova_compute[241222]: 2025-12-13 07:36:23.587 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 07:36:23 compute-0 nova_compute[241222]: 2025-12-13 07:36:23.587 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:36:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 07:36:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721358860' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 13 07:36:23 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075121361' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: from='client.14414 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: pgmap v788: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:23 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3909206045' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/113455235' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2644590945' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3721358860' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 07:36:23 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1075121361' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 07:36:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:36:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770081155' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.083 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:36:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 07:36:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949782524' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 07:36:24 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14432 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:24 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: 2025-12-13T07:36:24.315+0000 7facc0ef1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 07:36:24 compute-0 ceph-mgr[75200]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.326 241226 WARNING nova.virt.libvirt.driver [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.327 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4989MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.327 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.327 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.381 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.381 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.396 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 07:36:24 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:24 compute-0 ceph-mon[74928]: from='client.14418 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:24 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/770081155' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:36:24 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3949782524' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 07:36:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 07:36:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167973010' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 07:36:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 07:36:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272743291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.853 241226 DEBUG oslo_concurrency.processutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.860 241226 DEBUG nova.compute.provider_tree [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed in ProviderTree for provider: 1d614cf3-e40f-4742-a628-7a61041be9be update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.880 241226 DEBUG nova.scheduler.client.report [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Inventory has not changed for provider 1d614cf3-e40f-4742-a628-7a61041be9be based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.881 241226 DEBUG nova.compute.resource_tracker [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 07:36:24 compute-0 nova_compute[241222]: 2025-12-13 07:36:24.881 241226 DEBUG oslo_concurrency.lockutils [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:36:24 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 13 07:36:24 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242158922' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 07:36:25 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14440 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 13 07:36:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705410132' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 07:36:25 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14444 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.614902 4 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000039 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615050 4 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615306 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000058 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615449 4 0.000009
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615405 4 0.000049
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615553 4 0.000125
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000049 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000169 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000191 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000039 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000181 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615401 4 0.000008
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000063 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615886 4 0.000009
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000243 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000271 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615723 4 0.000116
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000114 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000711 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616213 4 0.000009
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000026 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616278 4 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616331 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000029 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.615506 4 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000020 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000057 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616740 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000024 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616732 4 0.000014
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616541 4 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000027 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000026 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616843 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616687 4 0.000010
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000024 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000048 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616821 4 0.000008
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616977 4 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000025 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616839 4 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000029 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616846 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000024 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000022 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616833 4 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 peering pruub 100.238204956s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 0.617906 3 0.000321
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 peering pruub 100.238204956s@ mbc={}] exit Started/Primary/Peering 0.618042 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47 pruub=9.963310242s) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 unknown pruub 100.238204956s@ mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.617012 4 0.000010
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000026 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000022 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617059 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617077 4 0.000008
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617230 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000031 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000045 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000025 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617118 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617334 4 0.000010
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.616109 4 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000025 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Reset 0.617682 4 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000302 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000323 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000022 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002380 3 0.000094
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002227 3 0.000136
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002089 3 0.000231
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000420 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000438 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002176 3 0.000096
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002138 3 0.000092
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001971 3 0.000336
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001696 3 0.000696
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001238 3 0.000060
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001367 3 0.000940
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.12( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001135 3 0.000095
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001127 3 0.000148
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001098 3 0.000910
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001074 3 0.000056
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001020 3 0.000065
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.18( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000030 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=39/40 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002085 3 0.000072
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002046 3 0.000066
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002025 3 0.000148
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.5( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002000 3 0.000055
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002250 3 0.000119
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002249 3 0.000059
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.9( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002281 3 0.000053
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.c( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002284 3 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.0( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=39/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 43'65 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002290 3 0.000056
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002292 3 0.000051
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002259 3 0.000064
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.3( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002281 3 0.000075
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002285 3 0.000054
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.14( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001989 3 0.001149
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.1d( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001954 3 0.000359
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.15( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001954 3 0.000061
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001886 3 0.000473
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=39/39 les/c/f=40/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002039 3 0.002192
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 48 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/39 les/c/f=48/40/0 sis=47) [2] r=0 lpr=47 pi=[39,47)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:19.513499+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:15:49.343790+0000 osd.2 (osd.2) 46 : cluster [DBG] 5.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:15:49.354374+0000 osd.2 (osd.2) 47 : cluster [DBG] 5.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 47)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:15:49.343790+0000 osd.2 (osd.2) 46 : cluster [DBG] 5.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:15:49.354374+0000 osd.2 (osd.2) 47 : cluster [DBG] 5.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:20.513677+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe140000/0x0/0x4ffc00000, data 0x3987a/0x86000, compress 0x0/0x0/0x0, omap 0x77ba, meta 0x1a28846), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:21.513818+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:22.513945+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.943325996s of 11.972549438s, submitted: 167
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:23.514086+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458189 data_alloc: 218103808 data_used: 0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:24.514223+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:25.514335+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:26.514486+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:15:56.450874+0000 osd.2 (osd.2) 48 : cluster [DBG] 2.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:15:56.461479+0000 osd.2 (osd.2) 49 : cluster [DBG] 2.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136144 7 0.000231
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136141 7 0.000074
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138559 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138272 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138474 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138631 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138504 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138661 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863642693s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896354675s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] exit Reset 0.000103 1 0.000168
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136273 7 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138472 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138530 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138547 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863589287s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896354675s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863616943s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136288 7 0.000021
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138290 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.138363 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] exit Reset 0.000044 1 0.000061
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.138621 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863591194s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136298 7 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863552094s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.896423340s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.137687 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.138407 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.138427 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863612175s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.896522522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] exit Reset 0.000077 1 0.000123
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863510132s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896423340s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] exit Reset 0.000040 1 0.000061
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863586426s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.134176 7 0.000032
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.136242 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.136280 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.136297 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865627289s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898643494s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136454 7 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] exit Reset 0.000021 1 0.000036
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138173 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138454 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138470 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.865616798s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898643494s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863466263s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896522522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] exit Reset 0.000029 1 0.000051
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863451004s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896522522s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136560 7 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137792 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137829 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137846 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136590 7 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137751 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137818 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863352776s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896545410s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137836 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] exit Reset 0.000031 1 0.000052
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863326073s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896537781s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863339424s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896545410s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] exit Reset 0.000023 1 0.000044
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863314629s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896537781s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136732 7 0.000021
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137775 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137816 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.135677 7 0.000287
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137785 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137819 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137837 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864120483s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897537231s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] exit Reset 0.000025 1 0.000045
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.864109993s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897537231s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.135769 7 0.000839
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137834 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137867 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137913 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863996506s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897552490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] exit Reset 0.000023 1 0.000071
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863986015s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897552490s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.135614 7 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.137884 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.137922 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.137937 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863911629s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897605896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] exit Reset 0.000022 1 0.000039
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863902092s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.135994 7 0.000851
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138010 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138042 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138244 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862752914s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896553040s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] exit Reset 0.000033 1 0.000472
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862736702s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896553040s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.135837 7 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138111 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.138143 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.138158 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138301 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863280296s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897605896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] exit Reset 0.000436 1 0.000451
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136239 7 0.000027
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863158226s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897590637s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] exit Reset 0.000430 1 0.000692
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863239288s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897605896s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863135338s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897590637s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138622 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.138716 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136361 7 0.000027
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138685 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138716 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138731 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.138746 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862961769s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897644043s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] exit Reset 0.000041 1 0.000276
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.863029480s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897651672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] exit Reset 0.000216 1 0.000249
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862827301s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897651672s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136609 7 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138921 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.138975 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136668 7 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138975 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.139007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.139023 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862700462s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.897689819s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.138992 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] exit Reset 0.000036 1 0.000054
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862665176s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.897674561s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862675667s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897689819s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] exit Reset 0.000057 1 0.000175
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] exit Start 0.000022 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862626076s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.897674561s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active+clean] exit Started/Primary/Active/Clean 7.136811 7 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary/Active 7.138790 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started/Primary 7.139122 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] exit Started 7.139138 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 43'66 mlcod 43'66 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862836838s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 active pruub 106.898017883s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] exit Reset 0.000034 1 0.000054
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862815857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.898017883s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.136869 7 0.000031
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.138783 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.139232 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.139247 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862802505s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898086548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] exit Reset 0.000027 1 0.000044
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862786293s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898086548s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.137017 7 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.139117 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.139148 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.139180 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=47) [2] r=0 lpr=47 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862488747s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.898033142s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] exit Reset 0.000028 1 0.000178
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862475395s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.898033142s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] exit Start 0.000976 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.862939835s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY pruub 106.897644043s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860626221s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 active pruub 106.896415710s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] exit Reset 0.003190 1 0.003243
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] exit Start 0.000038 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.860574722s) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY pruub 106.896415710s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000093 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000121 1 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000093 1 0.000035
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000492 1 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000007
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000206 1 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000008
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000018 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000033
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000034
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000042 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000196
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000067 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000035
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000058 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000164
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010017 2 0.000049
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000039 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010917 2 0.000030
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000013
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000036
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004738 2 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004609 2 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004486 2 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004312 2 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004208 2 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004200 2 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004107 2 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003989 2 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003900 2 0.000021
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004089 2 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004004 2 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003912 2 0.000014
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003844 2 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003464 2 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003047 2 0.000020
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002905 2 0.000050
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003714 2 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003881 2 0.000014
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003139 2 0.000050
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002807 2 0.000047
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002160 2 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 51 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fe13c000/0x0/0x4ffc00000, data 0x3ce7d/0x8c000, compress 0x0/0x0/0x0, omap 0x7cac, meta 0x1a28354), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 49)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:15:56.450874+0000 osd.2 (osd.2) 48 : cluster [DBG] 2.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:15:56.461479+0000 osd.2 (osd.2) 49 : cluster [DBG] 2.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 51 handle_osd_map epochs [51,52], i have 52, src has [1,52]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890346 2 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.893868 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.891091 2 0.000097
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.890336 2 0.000160
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.894119 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895489 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892869 2 0.000014
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895848 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892939 2 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.896064 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892853 2 0.000213
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.896797 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893092 2 0.000044
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.896990 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893235 2 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.897202 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889842 2 0.000168
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.895873 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893963 2 0.000013
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.897907 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.893474 2 0.000032
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.897821 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.892527 2 0.000070
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.894756 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894370 2 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.898623 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894798 2 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.899337 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894593 2 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898757 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894921 2 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.899473 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895229 2 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.899897 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.894866 2 0.000010
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898923 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895431 2 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900689 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895847 2 0.000610
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.906564 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.913313 7 0.000043
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.910568 7 0.001039
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.895820 2 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.907603 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002557 3 0.002320
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002380 3 0.000042
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.911622 7 0.000038
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002374 3 0.000134
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005004 3 0.000508
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002370 3 0.000224
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002325 3 0.000077
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002669 3 0.000044
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002530 3 0.002826
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000090 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002605 3 0.003081
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002607 3 0.000299
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002520 3 0.000058
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002734 3 0.000423
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.889786 2 0.000044
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.898576 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.896561 2 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.900612 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.914437 7 0.000117
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003576 3 0.000034
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003692 3 0.000195
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003168 3 0.000098
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003231 3 0.000046
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003437 3 0.000047
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000193 1 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003301 3 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003696 3 0.000037
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003042 3 0.000263
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002522 3 0.000941
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004185 3 0.000070
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004314 3 0.005992
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.919320 7 0.000227
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] exit Started/Stray 0.917270 7 0.000034
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 43'66 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.921184 7 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.920163 7 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.920433 7 0.000026
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.920317 7 0.000027
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.919186 7 0.000045
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918495 7 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.918759 7 0.000027
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.922683 7 0.000031
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923848 7 0.000032
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923879 7 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.923686 7 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924108 7 0.000036
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.922692 7 0.000033
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.922096 7 0.000055
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924231 7 0.000370
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.921267 7 0.000118
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.041322 2 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.041340 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:27.514624+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:15:57.401148+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.0 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:15:57.411765+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.0 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.119749 2 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.119771 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 43'2 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.159613 3 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 43'2 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.159787 3 0.000069
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 43'2 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 43'2 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.167212 2 0.000020
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.167240 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069833 1 0.000043
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224708 1 0.000033
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224740 1 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000068 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224820 1 0.000013
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224872 1 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224903 1 0.000013
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224950 1 0.000013
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.224978 1 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.223357 1 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222118 1 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222119 1 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222149 1 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222223 1 0.000014
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222237 1 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222310 1 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222336 1 0.000148
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.222355 1 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.191997 1 0.000032
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.107618 1 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.066187 1 0.000083
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007515 1 0.000079
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.232261 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.153481 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014831 1 0.000084
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.239601 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.159785 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022343 1 0.000039
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.247196 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.167651 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.249540 2 0.000013
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.249564 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000044 1 0.000085
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030255 1 0.000042
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.255151 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.175491 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036852 1 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.261792 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.181015 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044307 1 0.000062
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.269284 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.187801 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051557 1 0.000097
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.276570 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.195367 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058926 1 0.000057
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.282317 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.205087 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066271 1 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.288429 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.212318 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073647 1 0.000071
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.295790 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.219688 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.081025 1 0.000085
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.303208 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.226914 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088271 1 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.310532 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.234678 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095693 1 0.000061
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.317989 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.240722 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103029 1 0.000053
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.325374 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.247521 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110431 1 0.000021
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.332798 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.257185 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117742 1 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.340122 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.261495 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.154745 2 0.000061
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.346777 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.299718 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.162080 2 0.000103
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.269733 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.308866 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.169736 2 0.000156
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.235980 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.316589 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.154755 2 0.000132
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.154857 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.321735 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.590672 2 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.590692 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000060 1 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.008114 2 0.000119
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.008205 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.510543 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.692428 2 0.000020
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ReplicaActive 0.692445 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000036
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete/Deleting 0.008140 2 0.000102
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started/ToDelete 0.008245 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] lb MIN local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 pct=0'0 crt=48'67 lcod 43'66 active mbc={}] exit Started 1.615159 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 51)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:15:57.401148+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.0 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:15:57.411765+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.0 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:28.514774+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 2056192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 466944 data_alloc: 218103808 data_used: 0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe134000/0x0/0x4ffc00000, data 0x41159/0x94000, compress 0x0/0x0/0x0, omap 0x81d4, meta 0x1a27e2c), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:29.514872+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 1998848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 53 handle_osd_map epochs [54,55], i have 53, src has [1,55]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:30.514976+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:00.388928+0000 osd.2 (osd.2) 52 : cluster [DBG] 3.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:00.399468+0000 osd.2 (osd.2) 53 : cluster [DBG] 3.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 1990656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 55 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x46360/0x9d000, compress 0x0/0x0/0x0, omap 0x8704, meta 0x1a278fc), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 53)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:00.388928+0000 osd.2 (osd.2) 52 : cluster [DBG] 3.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:00.399468+0000 osd.2 (osd.2) 53 : cluster [DBG] 3.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:31.515169+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:01.389558+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:01.400088+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 56 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x46360/0x9d000, compress 0x0/0x0/0x0, omap 0x8704, meta 0x1a278fc), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 55)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:01.389558+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:01.400088+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:32.515378+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:02.352431+0000 osd.2 (osd.2) 56 : cluster [DBG] 7.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:02.363037+0000 osd.2 (osd.2) 57 : cluster [DBG] 7.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 57)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:02.352431+0000 osd.2 (osd.2) 56 : cluster [DBG] 7.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:02.363037+0000 osd.2 (osd.2) 57 : cluster [DBG] 7.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:33.515644+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 480563 data_alloc: 218103808 data_used: 0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.125689507s of 11.183286667s, submitted: 233
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:34.515746+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 57 heartbeat osd_stat(store_statfs(0x4fe12c000/0x0/0x4ffc00000, data 0x47daf/0xa0000, compress 0x0/0x0/0x0, omap 0x896a, meta 0x1a27696), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:35.515847+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:36.515960+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:37.516070+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 57 handle_osd_map epochs [58,59], i have 57, src has [1,59]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:38.516173+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490751 data_alloc: 218103808 data_used: 848
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe11f000/0x0/0x4ffc00000, data 0x4d083/0xa9000, compress 0x0/0x0/0x0, omap 0x8ea4, meta 0x1a2715c), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:39.516267+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:40.516372+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:41.516494+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 59 handle_osd_map epochs [60,61], i have 59, src has [1,61]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000321 1 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000367 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000155 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=0 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000151 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000901 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000970 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:42.516612+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.202425 2 0.000922
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.203433 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.203461 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 62 handle_osd_map epochs [61,62], i have 62, src has [1,62]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000168 1 0.000240
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000045 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.203563 2 0.000042
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.203790 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.203962 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.204592 2 0.000058
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.204989 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.205012 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.204354 2 0.000042
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.204533 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000054 1 0.000109
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.204550 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000055 1 0.000272
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=0 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000097 1 0.001025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 62 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1064960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:43.516721+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:13.387265+0000 osd.2 (osd.2) 58 : cluster [DBG] 3.16 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:13.397771+0000 osd.2 (osd.2) 59 : cluster [DBG] 3.16 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.002253 6 0.000064
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.001620 6 0.000031
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.002435 6 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.003656 6 0.000130
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001608 3 0.000092
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000279 1 0.000215
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028506 1 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 42'87 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.030253 3 0.000194
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 42'87 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 42'87 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000037 1 0.000091
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 lc 42'87 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 521577 data_alloc: 218103808 data_used: 848
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045531 1 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 42'47 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.076312 3 0.000126
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 42'47 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 42'47 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000062 1 0.000072
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 lc 42'47 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059847 1 0.000062
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.135387 3 0.000352
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000051 1 0.000052
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045539 1 0.000060
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 59)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:13.387265+0000 osd.2 (osd.2) 58 : cluster [DBG] 3.16 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:13.397771+0000 osd.2 (osd.2) 59 : cluster [DBG] 3.16 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:44.516859+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.221976280s of 10.248309135s, submitted: 47
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 63 handle_osd_map epochs [60,64], i have 64, src has [1,64]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000014
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000277 1 0.000212
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000320 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000106 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000138 1 0.000271
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.981002 1 0.000030
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.011472 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.013748 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000432 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=0 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=0 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.935430 1 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.011307 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.013776 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000090 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=0 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000356 1 0.000376
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000156 1 0.000168
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000205 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000263 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 1 0.000021
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000139
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000156 1 0.000027
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000188 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.829932 1 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.011206 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.014959 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000045
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 1 0.000021
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.875715 1 0.000043
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.012145 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.013787 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[47,62)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000053 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000074
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001168 3 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001154 3 0.000026
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001163 3 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=26
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=26
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000991 3 0.000087
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x540c3/0xbb000, compress 0x0/0x0/0x0, omap 0x9616, meta 0x1a269ea), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:45.516969+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 64 handle_osd_map epochs [64,65], i have 65, src has [1,65]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991224 2 0.000038
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992342 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991481 2 0.000026
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992688 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.992797 2 0.000038
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.993005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.993023 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000060 1 0.000090
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992015 2 0.000036
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993251 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.993369 2 0.000224
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.993666 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.993796 2 0.000056
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.993679 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.994241 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.994259 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000089 1 0.000126
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.994522 2 0.000051
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.994852 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.994877 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000368 1 0.000384
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000417 1 0.000440
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992916 2 0.000032
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994239 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001570 3 0.000118
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001491 3 0.000042
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001140 3 0.000082
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=64/65 n=7 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002090 3 0.000144
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/47 les/c/f=65/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 65 heartbeat osd_stat(store_statfs(0x4fe107000/0x0/0x4ffc00000, data 0x577d2/0xc1000, compress 0x0/0x0/0x0, omap 0x9b60, meta 0x1a264a0), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:46.517081+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004936 6 0.000026
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.005246 6 0.000038
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006132 6 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.005288 6 0.000026
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001874 3 0.000256
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000035 1 0.000048
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.051459 3 0.000400
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049819 1 0.000015
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000124 1 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059718 1 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.111224 3 0.000046
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000037 1 0.000051
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038649 1 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.149904 3 0.000361
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000047 1 0.000034
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031595 1 0.000022
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:47.517200+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:17.373990+0000 osd.2 (osd.2) 60 : cluster [DBG] 6.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:17.391644+0000 osd.2 (osd.2) 61 : cluster [DBG] 6.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe0ff000/0x0/0x4ffc00000, data 0x59866/0xcd000, compress 0x0/0x0/0x0, omap 0x9dcb, meta 0x1a26235), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000088 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000158 1 0.000038
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000198 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=0 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000300 1 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000337 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 66 handle_osd_map epochs [66,67], i have 67, src has [1,67]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.095232 2 0.000169
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.095603 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.095628 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000038 1 0.000085
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.095846 2 0.000048
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.096063 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.096078 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000080 1 0.000100
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000027 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.839957 1 0.000056
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.989964 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.996114 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000053
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.938852 1 0.000119
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.990699 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.995676 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000061
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.879275 1 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.990657 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.996137 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000108
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.808801 1 0.000019
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.990405 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.995720 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000036 1 0.000049
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001274 2 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001181 2 0.000020
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000951 2 0.000035
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000866 2 0.000050
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000701 2 0.000086
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=23
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=23
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000593 2 0.000034
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000554 2 0.000018
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=20
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=20
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000475 2 0.000342
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 61)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:17.373990+0000 osd.2 (osd.2) 60 : cluster [DBG] 6.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:17.391644+0000 osd.2 (osd.2) 61 : cluster [DBG] 6.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1384448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:48.517581+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.001797 5 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.001599 5 0.000050
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999528 2 0.000031
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001554 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999595 2 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001405 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999561 2 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001247 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999739 2 0.000026
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001273 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 42'36 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001827 4 0.000080
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 42'36 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 42'36 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000052 1 0.000049
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 lc 42'36 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003868 3 0.000136
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000043 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/55 les/c/f=68/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003900 3 0.000308
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003809 3 0.000153
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003839 3 0.000036
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/55 les/c/f=68/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/55 les/c/f=68/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000040 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/55 les/c/f=68/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/54 les/c/f=68/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042691 1 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 42'51 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.044737 4 0.000085
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 42'51 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 42'51 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000075 1 0.000067
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 lc 42'51 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1269760 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 623972 data_alloc: 218103808 data_used: 848
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052613 1 0.000038
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:49.517720+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.976872 1 0.000021
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.021516 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.924007 1 0.000026
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.023334 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.021514 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.023156 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[47,67)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000043 1 0.000066
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 1 0.000060
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000024
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000259
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=19
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001029 3 0.000039
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000797 3 0.000032
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 1179648 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fe0f1000/0x0/0x4ffc00000, data 0x5d163/0xd7000, compress 0x0/0x0/0x0, omap 0xa2e0, meta 0x1a25d20), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:50.517832+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999330 2 0.000038
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000237 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999439 2 0.000034
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000524 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001220 3 0.000081
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001162 3 0.000098
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=69/70 n=7 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=69/70 n=6 ec=47/37 lis/c=69/47 les/c/f=70/48/0 sis=69) [2] r=0 lpr=69 pi=[47,69)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1122304 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:51.517947+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1114112 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe0ed000/0x0/0x4ffc00000, data 0x6062b/0xdd000, compress 0x0/0x0/0x0, omap 0xa7f9, meta 0x1a25807), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:52.518079+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe0ed000/0x0/0x4ffc00000, data 0x6062b/0xdd000, compress 0x0/0x0/0x0, omap 0xa7f9, meta 0x1a25807), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1097728 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:53.518218+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:23.389852+0000 osd.2 (osd.2) 62 : cluster [DBG] 4.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:23.400423+0000 osd.2 (osd.2) 63 : cluster [DBG] 4.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 63)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:23.389852+0000 osd.2 (osd.2) 62 : cluster [DBG] 4.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:23.400423+0000 osd.2 (osd.2) 63 : cluster [DBG] 4.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1089536 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 637611 data_alloc: 218103808 data_used: 1100
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:54.518374+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:24.357676+0000 osd.2 (osd.2) 64 : cluster [DBG] 6.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:24.368285+0000 osd.2 (osd.2) 65 : cluster [DBG] 6.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.967880249s of 10.031354904s, submitted: 124
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 65)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:24.357676+0000 osd.2 (osd.2) 64 : cluster [DBG] 6.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:24.368285+0000 osd.2 (osd.2) 65 : cluster [DBG] 6.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1064960 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:55.518546+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1056768 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:56.518701+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:26.321089+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:26.331635+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 67)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:26.321089+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:26.331635+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1056768 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fe0e7000/0x0/0x4ffc00000, data 0x63d63/0xe3000, compress 0x0/0x0/0x0, omap 0xad16, meta 0x1a252ea), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:57.518869+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:58.519015+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:28.370836+0000 osd.2 (osd.2) 68 : cluster [DBG] 7.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:28.381549+0000 osd.2 (osd.2) 69 : cluster [DBG] 7.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 69)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:28.370836+0000 osd.2 (osd.2) 68 : cluster [DBG] 7.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:28.381549+0000 osd.2 (osd.2) 69 : cluster [DBG] 7.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654478 data_alloc: 218103808 data_used: 1685
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=0 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000087 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=0 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000046
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000058 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000095 1 0.000151
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000374 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=0 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=0 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000075 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:59.519215+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x658ff/0xe6000, compress 0x0/0x0/0x0, omap 0xaf84, meta 0x1a2507c), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 73 handle_osd_map epochs [73,74], i have 74, src has [1,74]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.886183 2 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.886287 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.886303 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000064 1 0.000104
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.886771 2 0.000312
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.887203 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.887311 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000144 1 0.000315
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000101 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:00.519342+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:30.387383+0000 osd.2 (osd.2) 70 : cluster [DBG] 6.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:30.405023+0000 osd.2 (osd.2) 71 : cluster [DBG] 6.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 71)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:30.387383+0000 osd.2 (osd.2) 70 : cluster [DBG] 6.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:30.405023+0000 osd.2 (osd.2) 71 : cluster [DBG] 6.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 74 heartbeat osd_stat(store_statfs(0x4fe0e1000/0x0/0x4ffc00000, data 0x673b2/0xe9000, compress 0x0/0x0/0x0, omap 0xb237, meta 0x1a24dc9), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.038255 5 0.000482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.039362 5 0.000032
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 42'66 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002599 4 0.000069
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 42'66 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 42'66 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000093 1 0.000041
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 lc 42'66 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035823 1 0.000082
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.038596 4 0.000081
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000035 1 0.000043
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.066842 1 0.000083
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:01.519512+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.933198 1 0.000026
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.971775 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.010478 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000041 1 0.000068
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000025
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.866150 1 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.972427 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.011876 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[47,74)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000113 1 0.000951
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000044 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001068 3 0.000030
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000278
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000524 3 0.000055
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 819200 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:02.519641+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 76 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003146 2 0.000042
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002540 2 0.000050
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004280 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003293 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000848 4 0.000067
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001128 4 0.000189
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=7 ec=47/37 lis/c=76/47 les/c/f=77/48/0 sis=76) [2] r=0 lpr=76 pi=[47,76)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:03.519768+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fe0d5000/0x0/0x4ffc00000, data 0x6c5f2/0xf5000, compress 0x0/0x0/0x0, omap 0xb9cc, meta 0x1a24634), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694860 data_alloc: 218103808 data_used: 2278
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:04.519906+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.017603874s of 10.048166275s, submitted: 74
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:05.520040+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:06.520162+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:07.520277+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:37.457652+0000 osd.2 (osd.2) 72 : cluster [DBG] 6.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:37.475321+0000 osd.2 (osd.2) 73 : cluster [DBG] 6.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 73)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:37.457652+0000 osd.2 (osd.2) 72 : cluster [DBG] 6.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:37.475321+0000 osd.2 (osd.2) 73 : cluster [DBG] 6.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 704512 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:08.520421+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 663552 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 699325 data_alloc: 218103808 data_used: 2278
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _renew_subs
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _renew_subs
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 78 handle_osd_map epochs [80,80], i have 78, src has [1,80]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 78 handle_osd_map epochs [79,80], i have 78, src has [1,80]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:09.520557+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:39.505901+0000 osd.2 (osd.2) 74 : cluster [DBG] 7.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:39.516511+0000 osd.2 (osd.2) 75 : cluster [DBG] 7.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0cc000/0x0/0x4ffc00000, data 0x718c6/0xfe000, compress 0x0/0x0/0x0, omap 0xbef6, meta 0x1a2410a), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 75)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:39.505901+0000 osd.2 (osd.2) 74 : cluster [DBG] 7.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:39.516511+0000 osd.2 (osd.2) 75 : cluster [DBG] 7.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 630784 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:10.520717+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 630784 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:11.520855+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 598016 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:12.520995+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 581632 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 80 handle_osd_map epochs [81,82], i have 80, src has [1,82]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:13.521102+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 3 last_log 78 sent 75 num 3 unsent 3 sending 3
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:42.552916+0000 osd.2 (osd.2) 76 : cluster [DBG] 6.14 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:42.577604+0000 osd.2 (osd.2) 77 : cluster [DBG] 6.14 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:43.515892+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 507904 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718930 data_alloc: 218103808 data_used: 2863
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 78)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:42.552916+0000 osd.2 (osd.2) 76 : cluster [DBG] 6.14 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:42.577604+0000 osd.2 (osd.2) 77 : cluster [DBG] 6.14 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:43.515892+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:14.521266+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 1 last_log 79 sent 78 num 1 unsent 1 sending 1
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:43.526492+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 499712 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 79)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:43.526492+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.031378746s of 10.040954590s, submitted: 12
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fe0c6000/0x0/0x4ffc00000, data 0x74ffe/0x104000, compress 0x0/0x0/0x0, omap 0xc167, meta 0x1a23e99), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=0 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000106 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=0 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000069 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000103 1 0.000163
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000181 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 83 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:15.521455+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 491520 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.925813 2 0.000097
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.926052 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.926197 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000158 1 0.000251
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000045 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:16.521596+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:45.529781+0000 osd.2 (osd.2) 80 : cluster [DBG] 3.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:45.540379+0000 osd.2 (osd.2) 81 : cluster [DBG] 3.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 483328 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 0.999723 6 0.000118
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002342 3 0.000090
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000040
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 81)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:45.529781+0000 osd.2 (osd.2) 80 : cluster [DBG] 3.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:45.540379+0000 osd.2 (osd.2) 81 : cluster [DBG] 3.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042634 1 0.000079
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:17.521737+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.821190 1 0.000035
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.866330 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.866144 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[54,84)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000064 1 0.000090
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004590 2 0.000029
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 86 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=16
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=16
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000234 2 0.000051
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1310720 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:18.521867+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 86 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001079 2 0.000046
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005960 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=86/54 les/c/f=87/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000887 4 0.000118
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=86/54 les/c/f=87/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=86/54 les/c/f=87/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=86/87 n=6 ec=47/37 lis/c=86/54 les/c/f=87/55/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1269760 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746892 data_alloc: 218103808 data_used: 2863
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:19.522172+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 1245184 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _renew_subs
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:20.522303+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 89 heartbeat osd_stat(store_statfs(0x4fcf0c000/0x0/0x4ffc00000, data 0x7f367/0x118000, compress 0x0/0x0/0x0, omap 0xd0af, meta 0x2bc2f51), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1171456 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:21.522399+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 1 last_log 82 sent 81 num 1 unsent 1 sending 1
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:51.515156+0000 osd.2 (osd.2) 82 : cluster [DBG] 7.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 82)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:51.515156+0000 osd.2 (osd.2) 82 : cluster [DBG] 7.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1163264 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:22.522564+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 3 last_log 85 sent 82 num 3 unsent 3 sending 3
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:51.525022+0000 osd.2 (osd.2) 83 : cluster [DBG] 7.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:52.485892+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:52.496472+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 85)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:51.525022+0000 osd.2 (osd.2) 83 : cluster [DBG] 7.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:52.485892+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:52.496472+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1163264 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 89 handle_osd_map epochs [90,91], i have 89, src has [1,91]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:23.522719+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1155072 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 762276 data_alloc: 218103808 data_used: 3392
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:24.522860+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 39.029935 78 0.000192
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 39.032118 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 40.026369 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started 40.026389 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970234871s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 active pruub 165.223297119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] exit Reset 0.000104 1 0.000126
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 92 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=8.970201492s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY pruub 165.223297119s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1122304 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.538240433s of 10.568582535s, submitted: 59
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fcf04000/0x0/0x4ffc00000, data 0x85e1e/0x124000, compress 0x0/0x0/0x0, omap 0xd85c, meta 0x2bc27a4), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:25.522966+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:55.401148+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:55.411724+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.008826 3 0.000054
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.008953 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000128 1 0.000283
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000047 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 87)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:55.401148+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:55.411724+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001857 2 0.000141
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 93 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1089536 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:26.523117+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005813 3 0.000171
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007861 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002699 5 0.000162
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000067 1 0.000066
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000757 1 0.000013
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028361 2 0.000042
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 1040384 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:27.523249+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.920479 1 0.000068
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 0.952535 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 1.960432 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 1.960527 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.050017357s) [0] async=[0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 active pruub 174.272872925s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] exit Reset 0.000187 1 0.000262
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] exit Start 0.000103 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=15.049875259s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY pruub 174.272872925s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 1040384 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:28.523394+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:58.360107+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:58.370350+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.002731 7 0.000216
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000063 1 0.000057
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 DELETING pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030581 2 0.000152
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030696 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.033587 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 89)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:58.360107+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:58.370350+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 991232 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 773705 data_alloc: 218103808 data_used: 3392
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:29.523631+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:59.404837+0000 osd.2 (osd.2) 90 : cluster [DBG] 4.1 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:16:59.415542+0000 osd.2 (osd.2) 91 : cluster [DBG] 4.1 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 91)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:59.404837+0000 osd.2 (osd.2) 90 : cluster [DBG] 4.1 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:16:59.415542+0000 osd.2 (osd.2) 91 : cluster [DBG] 4.1 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 983040 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcef7000/0x0/0x4ffc00000, data 0x8c9f6/0x12f000, compress 0x0/0x0/0x0, omap 0xe284, meta 0x2bc1d7c), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _renew_subs
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:30.523804+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:00.367115+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.1 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:00.377706+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.1 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 93)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:00.367115+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.1 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:00.377706+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.1 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 96 handle_osd_map epochs [96,97], i have 97, src has [1,97]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19(unlocked)] enter Initial
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=0 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=0 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000163 1 0.000034
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000194 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 97 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 983040 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fcefd000/0x0/0x4ffc00000, data 0x8c9f6/0x12f000, compress 0x0/0x0/0x0, omap 0xe284, meta 0x2bc1d7c), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:31.523965+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001283 2 0.000039
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001523 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001555 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=0 lpr=97 pi=[54,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000681 1 0.000754
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 98 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 974848 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:32.524107+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 974848 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _renew_subs
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.614506 5 0.000052
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002395 4 0.000164
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000079 1 0.000035
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.067183 1 0.000061
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:33.524239+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:03.396069+0000 osd.2 (osd.2) 94 : cluster [DBG] 4.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:03.409822+0000 osd.2 (osd.2) 95 : cluster [DBG] 4.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 95)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:03.396069+0000 osd.2 (osd.2) 94 : cluster [DBG] 4.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:03.409822+0000 osd.2 (osd.2) 95 : cluster [DBG] 4.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.330351 1 0.000046
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.400144 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.014701 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] r=-1 lpr=98 pi=[54,98)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000167 1 0.000247
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000080 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000178
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Dec 13 07:36:25 compute-0 ceph-osd[87155]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000536 3 0.000053
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000030 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 917504 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807639 data_alloc: 218103808 data_used: 3392
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:34.524394+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:04.445158+0000 osd.2 (osd.2) 96 : cluster [DBG] 4.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:04.455952+0000 osd.2 (osd.2) 97 : cluster [DBG] 4.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000152 2 0.000119
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 101 handle_osd_map epochs [100,101], i have 101, src has [1,101]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000882 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 97)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:04.445158+0000 osd.2 (osd.2) 96 : cluster [DBG] 4.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:04.455952+0000 osd.2 (osd.2) 97 : cluster [DBG] 4.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=100/54 les/c/f=101/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000877 3 0.000286
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=100/54 les/c/f=101/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=100/54 les/c/f=101/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=100/101 n=6 ec=47/37 lis/c=100/54 les/c/f=101/55/0 sis=100) [2] r=0 lpr=100 pi=[54,100)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 884736 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:35.524572+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 884736 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcee5000/0x0/0x4ffc00000, data 0x95223/0x141000, compress 0x0/0x0/0x0, omap 0xef2a, meta 0x2bc10d6), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:36.524704+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 884736 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:37.524862+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.109385490s of 12.143373489s, submitted: 73
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcee5000/0x0/0x4ffc00000, data 0x95223/0x141000, compress 0x0/0x0/0x0, omap 0xef2a, meta 0x2bc10d6), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 876544 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:38.524996+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:07.544507+0000 osd.2 (osd.2) 98 : cluster [DBG] 3.7 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:07.555192+0000 osd.2 (osd.2) 99 : cluster [DBG] 3.7 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 876544 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813619 data_alloc: 218103808 data_used: 3392
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 99)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:07.544507+0000 osd.2 (osd.2) 98 : cluster [DBG] 3.7 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:07.555192+0000 osd.2 (osd.2) 99 : cluster [DBG] 3.7 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:39.525153+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 868352 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _renew_subs
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:40.525330+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 868352 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:41.525427+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 1 last_log 100 sent 99 num 1 unsent 1 sending 1
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:11.523478+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 868352 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 100)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:11.523478+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:42.525750+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 3 last_log 103 sent 100 num 3 unsent 3 sending 3
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:11.533774+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:12.477697+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:12.488283+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 69394432 unmapped: 851968 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 103)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:11.533774+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:12.477697+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:12.488283+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:43.526015+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:13.463082+0000 osd.2 (osd.2) 104 : cluster [DBG] 6.f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:13.477354+0000 osd.2 (osd.2) 105 : cluster [DBG] 6.f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 102 heartbeat osd_stat(store_statfs(0x4fcee8000/0x0/0x4ffc00000, data 0x96dbf/0x144000, compress 0x0/0x0/0x0, omap 0xf1fe, meta 0x2bc0e02), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 102 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.825406 75 0.000162
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.826300 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 41.829615 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active mbc={}] exit Started 41.829702 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175204277s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 active pruub 190.297714233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] exit Reset 0.000058 1 0.000103
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 103 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103 pruub=15.175173759s) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY pruub 190.297714233s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 843776 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827120 data_alloc: 218103808 data_used: 3392
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 105)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:13.463082+0000 osd.2 (osd.2) 104 : cluster [DBG] 6.f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:13.477354+0000 osd.2 (osd.2) 105 : cluster [DBG] 6.f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 0.698228 3 0.000032
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 0.698261 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=-1 lpr=103 pi=[76,103)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000063 1 0.000090
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000030
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 104 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:44.526173+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:14.482578+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:14.493059+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 843776 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 107)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:14.482578+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:14.493059+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010217 4 0.000047
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.010297 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=76/77 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:45.526315+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 811008 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.646257 5 0.000585
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000053 1 0.000043
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000314 1 0.000046
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063613 2 0.000046
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.294860 1 0.000052
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005319 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 2.015641 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 2.015668 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[76,104)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.640290260s) [0] async=[0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 active pruub 193.476989746s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 60.615667 123 0.000221
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 60.617189 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 61.609887 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] exit Started 61.609904 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385339737s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 active pruub 189.222137451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] exit Reset 0.000036 1 0.000057
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106 pruub=11.385320663s) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY pruub 189.222137451s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] exit Reset 0.000805 1 0.000878
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] exit Start 0.000096 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106 pruub=15.639621735s) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY pruub 193.476989746s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:46.526452+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 786432 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0x9db52/0x150000, compress 0x0/0x0/0x0, omap 0xfbda, meta 0x2bc0426), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.005952 3 0.000023
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.005987 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=-1 lpr=106 pi=[64,106)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000048 1 0.000077
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000226 1 0.000027
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.006074 7 0.000289
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000038 1 0.000039
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 DELETING pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067693 2 0.000120
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.067764 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=-1 lpr=106 pi=[76,106)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.074028 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:47.526567+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:17.433316+0000 osd.2 (osd.2) 108 : cluster [DBG] 3.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:17.443908+0000 osd.2 (osd.2) 109 : cluster [DBG] 3.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 1753088 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.767161369s of 10.787987709s, submitted: 34
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999221 4 0.000079
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999535 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=64/65 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 108 handle_osd_map epochs [107,108], i have 108, src has [1,108]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 109)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:17.433316+0000 osd.2 (osd.2) 108 : cluster [DBG] 3.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:17.443908+0000 osd.2 (osd.2) 109 : cluster [DBG] 3.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:48.526751+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.571800 5 0.000232
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000069 1 0.000069
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 1728512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830759 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000510 1 0.000064
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042439 2 0.000033
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 108 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.396310 1 0.000068
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011356 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 2.010920 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 2.010947 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[64,107)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559910774s) [0] async=[0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 active pruub 196.413848877s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] exit Reset 0.000193 1 0.000283
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] exit Start 0.000068 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109 pruub=15.559823990s) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY pruub 196.413848877s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:49.526880+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:19.448689+0000 osd.2 (osd.2) 110 : cluster [DBG] 4.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:19.459340+0000 osd.2 (osd.2) 111 : cluster [DBG] 4.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 1703936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 111)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:19.448689+0000 osd.2 (osd.2) 110 : cluster [DBG] 4.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:19.459340+0000 osd.2 (osd.2) 111 : cluster [DBG] 4.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _renew_subs
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.035395 6 0.000352
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000863 2 0.000081
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 DELETING pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045392 2 0.000153
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046324 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=-1 lpr=109 pi=[64,109)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.081924 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:50.527047+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:20.458355+0000 osd.2 (osd.2) 112 : cluster [DBG] 4.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:20.468881+0000 osd.2 (osd.2) 113 : cluster [DBG] 4.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 113)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:20.458355+0000 osd.2 (osd.2) 112 : cluster [DBG] 4.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:20.468881+0000 osd.2 (osd.2) 113 : cluster [DBG] 4.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:51.527248+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fced2000/0x0/0x4ffc00000, data 0xa42b4/0x158000, compress 0x0/0x0/0x0, omap 0x1061c, meta 0x2bbf9e4), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:52.527395+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:22.413384+0000 osd.2 (osd.2) 114 : cluster [DBG] 4.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:22.423927+0000 osd.2 (osd.2) 115 : cluster [DBG] 4.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 115)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:22.413384+0000 osd.2 (osd.2) 114 : cluster [DBG] 4.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:22.423927+0000 osd.2 (osd.2) 115 : cluster [DBG] 4.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fced2000/0x0/0x4ffc00000, data 0xa42b4/0x158000, compress 0x0/0x0/0x0, omap 0x1061c, meta 0x2bbf9e4), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:53.527555+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832198 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fced2000/0x0/0x4ffc00000, data 0xa42b4/0x158000, compress 0x0/0x0/0x0, omap 0x1061c, meta 0x2bbf9e4), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 65.681385 130 0.000669
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 65.685412 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 66.687219 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active mbc={}] exit Started 66.687451 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319065094s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 active pruub 200.222534180s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] exit Reset 0.000124 1 0.000655
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 111 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111 pruub=14.319033623s) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY pruub 200.222534180s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:54.527712+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 1613824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fcecf000/0x0/0x4ffc00000, data 0xa5e50/0x15b000, compress 0x0/0x0/0x0, omap 0x10894, meta 0x2bbf76c), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.008059 3 0.000027
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.008091 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=-1 lpr=111 pi=[67,111)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000067
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000023 1 0.000028
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 112 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:55.527825+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 1605632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 113 handle_osd_map epochs [112,113], i have 113, src has [1,113]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998964 4 0.000043
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999052 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:56.527923+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:26.499540+0000 osd.2 (osd.2) 116 : cluster [DBG] 3.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:26.509828+0000 osd.2 (osd.2) 117 : cluster [DBG] 3.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.346602 5 0.000253
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000047 1 0.000027
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000256 1 0.000016
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039714 2 0.000047
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 1597440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 117)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:26.499540+0000 osd.2 (osd.2) 116 : cluster [DBG] 3.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:26.509828+0000 osd.2 (osd.2) 117 : cluster [DBG] 3.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.622825 1 0.000089
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009670 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 2.008757 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 2.008786 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[67,112)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336898804s) [1] async=[1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 active pruub 204.257400513s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] exit Reset 0.000146 1 0.000224
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] enter Started
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] enter Start
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] exit Start 0.000046 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114 pruub=15.336791992s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY pruub 204.257400513s@ mbc={}] enter Started/Stray
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:57.528056+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 1679360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.058753967s of 10.078710556s, submitted: 38
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.004258 7 0.000147
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000055 1 0.000082
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 DELETING pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.037845 2 0.000145
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.037941 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=-1 lpr=114 pi=[67,114)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.042309 0 0.000000
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:58.528188+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840090 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:59.528292+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _renew_subs
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:00.528404+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:29.565029+0000 osd.2 (osd.2) 118 : cluster [DBG] 7.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:29.575626+0000 osd.2 (osd.2) 119 : cluster [DBG] 7.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 119)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:29.565029+0000 osd.2 (osd.2) 118 : cluster [DBG] 7.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:29.575626+0000 osd.2 (osd.2) 119 : cluster [DBG] 7.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:01.528562+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:02.528699+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 1630208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:03.528813+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:32.591864+0000 osd.2 (osd.2) 120 : cluster [DBG] 6.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:32.602461+0000 osd.2 (osd.2) 121 : cluster [DBG] 6.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 1695744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843266 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 121)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:32.591864+0000 osd.2 (osd.2) 120 : cluster [DBG] 6.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:32.602461+0000 osd.2 (osd.2) 121 : cluster [DBG] 6.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:04.528949+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 1 last_log 122 sent 121 num 1 unsent 1 sending 1
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:34.524821+0000 osd.2 (osd.2) 122 : cluster [DBG] 10.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 1695744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 122)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:34.524821+0000 osd.2 (osd.2) 122 : cluster [DBG] 10.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:05.529112+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 1 last_log 123 sent 122 num 1 unsent 1 sending 1
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:34.535376+0000 osd.2 (osd.2) 123 : cluster [DBG] 10.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 1687552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 123)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:34.535376+0000 osd.2 (osd.2) 123 : cluster [DBG] 10.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:06.529261+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 1687552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:07.529404+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:36.581398+0000 osd.2 (osd.2) 124 : cluster [DBG] 10.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:36.592147+0000 osd.2 (osd.2) 125 : cluster [DBG] 10.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 1679360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 125)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:36.581398+0000 osd.2 (osd.2) 124 : cluster [DBG] 10.a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:36.592147+0000 osd.2 (osd.2) 125 : cluster [DBG] 10.a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.099387169s of 10.109117508s, submitted: 13
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:08.529561+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 1 last_log 126 sent 125 num 1 unsent 1 sending 1
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:38.520378+0000 osd.2 (osd.2) 126 : cluster [DBG] 10.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 1679360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850509 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 126)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:38.520378+0000 osd.2 (osd.2) 126 : cluster [DBG] 10.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:09.529701+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 1 last_log 127 sent 126 num 1 unsent 1 sending 1
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:38.534338+0000 osd.2 (osd.2) 127 : cluster [DBG] 10.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 1679360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 127)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:38.534338+0000 osd.2 (osd.2) 127 : cluster [DBG] 10.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:10.529863+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:39.553346+0000 osd.2 (osd.2) 128 : cluster [DBG] 10.1d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:39.564058+0000 osd.2 (osd.2) 129 : cluster [DBG] 10.1d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 1662976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 129)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:39.553346+0000 osd.2 (osd.2) 128 : cluster [DBG] 10.1d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:39.564058+0000 osd.2 (osd.2) 129 : cluster [DBG] 10.1d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:11.530035+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 1662976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:12.530185+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 1662976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:13.530323+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:42.572234+0000 osd.2 (osd.2) 130 : cluster [DBG] 10.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:42.582819+0000 osd.2 (osd.2) 131 : cluster [DBG] 10.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857754 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 131)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:42.572234+0000 osd.2 (osd.2) 130 : cluster [DBG] 10.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:42.582819+0000 osd.2 (osd.2) 131 : cluster [DBG] 10.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:14.530473+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:43.565768+0000 osd.2 (osd.2) 132 : cluster [DBG] 10.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:43.576373+0000 osd.2 (osd.2) 133 : cluster [DBG] 10.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 1646592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 133)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:43.565768+0000 osd.2 (osd.2) 132 : cluster [DBG] 10.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:43.576373+0000 osd.2 (osd.2) 133 : cluster [DBG] 10.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:15.530593+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:16.530708+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:45.540212+0000 osd.2 (osd.2) 134 : cluster [DBG] 10.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:45.550933+0000 osd.2 (osd.2) 135 : cluster [DBG] 10.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 1638400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 135)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:45.540212+0000 osd.2 (osd.2) 134 : cluster [DBG] 10.5 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:45.550933+0000 osd.2 (osd.2) 135 : cluster [DBG] 10.5 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:17.530850+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:46.568292+0000 osd.2 (osd.2) 136 : cluster [DBG] 10.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:46.578980+0000 osd.2 (osd.2) 137 : cluster [DBG] 10.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 1630208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 137)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:46.568292+0000 osd.2 (osd.2) 136 : cluster [DBG] 10.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:46.578980+0000 osd.2 (osd.2) 137 : cluster [DBG] 10.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:18.530996+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:47.549191+0000 osd.2 (osd.2) 138 : cluster [DBG] 10.0 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:47.563345+0000 osd.2 (osd.2) 139 : cluster [DBG] 10.0 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 1630208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 864993 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 139)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:47.549191+0000 osd.2 (osd.2) 138 : cluster [DBG] 10.0 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:47.563345+0000 osd.2 (osd.2) 139 : cluster [DBG] 10.0 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:19.531112+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.015295982s of 11.023313522s, submitted: 14
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 1622016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:20.531242+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:49.543706+0000 osd.2 (osd.2) 140 : cluster [DBG] 10.3 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:49.557588+0000 osd.2 (osd.2) 141 : cluster [DBG] 10.3 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 1622016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 141)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:49.543706+0000 osd.2 (osd.2) 140 : cluster [DBG] 10.3 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:49.557588+0000 osd.2 (osd.2) 141 : cluster [DBG] 10.3 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:21.531393+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 1613824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:22.531478+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:52.500942+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:52.511590+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 1613824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 143)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:52.500942+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:52.511590+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:23.531636+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 1605632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 869819 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:24.531735+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:54.504195+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.12 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:54.514904+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.12 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 1581056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 145)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:54.504195+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.12 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:54.514904+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.12 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:25.531871+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 1581056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:26.531989+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 1572864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:27.532092+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 1572864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:28.532177+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:58.487954+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:58.498508+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 1572864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874647 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 147)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:58.487954+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:58.498508+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:29.532288+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:59.512031+0000 osd.2 (osd.2) 148 : cluster [DBG] 8.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:17:59.522740+0000 osd.2 (osd.2) 149 : cluster [DBG] 8.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 1540096 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.918740273s of 10.926405907s, submitted: 10
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 149)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:59.512031+0000 osd.2 (osd.2) 148 : cluster [DBG] 8.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:17:59.522740+0000 osd.2 (osd.2) 149 : cluster [DBG] 8.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:30.532407+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:00.470158+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:00.480604+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 1531904 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 151)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:00.470158+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.1e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:00.480604+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.1e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:31.532535+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1523712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:32.532640+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1523712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:33.532791+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1523712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881890 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:34.532898+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:03.545005+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:03.555595+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 153)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:03.545005+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:03.555595+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1515520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:35.533040+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:05.500482+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:05.511094+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 155)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:05.500482+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:05.511094+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1515520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:36.533173+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:06.460566+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.12 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:06.471149+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.12 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 157)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:06.460566+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.12 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:06.471149+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.12 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1515520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:37.533349+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 1507328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:38.533507+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 1490944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886720 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:39.533655+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 1482752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:40.533804+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 1482752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.961461067s of 10.966039658s, submitted: 8
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:41.533957+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:11.436149+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:11.446787+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 159)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:11.436149+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:11.446787+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1474560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:42.534136+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1474560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:43.534231+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:13.372921+0000 osd.2 (osd.2) 160 : cluster [DBG] 8.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:13.383589+0000 osd.2 (osd.2) 161 : cluster [DBG] 8.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 161)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:13.372921+0000 osd.2 (osd.2) 160 : cluster [DBG] 8.1b scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:13.383589+0000 osd.2 (osd.2) 161 : cluster [DBG] 8.1b scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 1466368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891548 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:44.534393+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:14.341105+0000 osd.2 (osd.2) 162 : cluster [DBG] 8.4 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:14.351670+0000 osd.2 (osd.2) 163 : cluster [DBG] 8.4 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 163)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:14.341105+0000 osd.2 (osd.2) 162 : cluster [DBG] 8.4 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:14.351670+0000 osd.2 (osd.2) 163 : cluster [DBG] 8.4 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 1449984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:45.534537+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:15.360557+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:15.371050+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 165)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:15.360557+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:15.371050+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 1433600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:46.534672+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:16.350080+0000 osd.2 (osd.2) 166 : cluster [DBG] 8.d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:16.360670+0000 osd.2 (osd.2) 167 : cluster [DBG] 8.d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 167)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:16.350080+0000 osd.2 (osd.2) 166 : cluster [DBG] 8.d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:16.360670+0000 osd.2 (osd.2) 167 : cluster [DBG] 8.d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 1425408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:47.534937+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:17.396744+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.3 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:17.406891+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.3 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 169)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:17.396744+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.3 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:17.406891+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.3 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 1425408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:48.535120+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:18.362391+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:18.372990+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 171)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:18.362391+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.d scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:18.372990+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.d scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 1409024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903611 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:49.535252+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:19.369390+0000 osd.2 (osd.2) 172 : cluster [DBG] 11.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:19.379970+0000 osd.2 (osd.2) 173 : cluster [DBG] 11.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 173)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:19.369390+0000 osd.2 (osd.2) 172 : cluster [DBG] 11.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:19.379970+0000 osd.2 (osd.2) 173 : cluster [DBG] 11.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1400832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:50.535407+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1400832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:51.535543+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:21.405219+0000 osd.2 (osd.2) 174 : cluster [DBG] 11.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:21.415547+0000 osd.2 (osd.2) 175 : cluster [DBG] 11.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 175)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:21.405219+0000 osd.2 (osd.2) 174 : cluster [DBG] 11.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:21.415547+0000 osd.2 (osd.2) 175 : cluster [DBG] 11.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1400832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:52.535694+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1392640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:53.535795+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1392640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908437 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:54.535889+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 1384448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.983901024s of 13.995039940s, submitted: 18
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:55.535982+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:25.431237+0000 osd.2 (osd.2) 176 : cluster [DBG] 11.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:25.441811+0000 osd.2 (osd.2) 177 : cluster [DBG] 11.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 177)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:25.431237+0000 osd.2 (osd.2) 176 : cluster [DBG] 11.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:25.441811+0000 osd.2 (osd.2) 177 : cluster [DBG] 11.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1376256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:56.536103+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1376256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:57.536221+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:27.391974+0000 osd.2 (osd.2) 178 : cluster [DBG] 8.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:27.402589+0000 osd.2 (osd.2) 179 : cluster [DBG] 8.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 179)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:27.391974+0000 osd.2 (osd.2) 178 : cluster [DBG] 8.15 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:27.402589+0000 osd.2 (osd.2) 179 : cluster [DBG] 8.15 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1368064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:58.536356+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1368064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 913265 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:59.536476+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1359872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:00.536595+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1359872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:01.536707+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 1351680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:02.536843+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 1351680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:03.536973+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:33.395159+0000 osd.2 (osd.2) 180 : cluster [DBG] 11.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:33.405727+0000 osd.2 (osd.2) 181 : cluster [DBG] 11.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 181)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:33.395159+0000 osd.2 (osd.2) 180 : cluster [DBG] 11.1a scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:33.405727+0000 osd.2 (osd.2) 181 : cluster [DBG] 11.1a scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 1351680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915680 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:04.537152+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 1343488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:05.537282+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 1343488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:06.537391+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 1343488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.931067467s of 11.936884880s, submitted: 6
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:07.537478+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:37.368115+0000 osd.2 (osd.2) 182 : cluster [DBG] 8.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:37.378726+0000 osd.2 (osd.2) 183 : cluster [DBG] 8.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 183)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:37.368115+0000 osd.2 (osd.2) 182 : cluster [DBG] 8.11 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:37.378726+0000 osd.2 (osd.2) 183 : cluster [DBG] 8.11 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1327104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:08.537621+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:38.327684+0000 osd.2 (osd.2) 184 : cluster [DBG] 11.9 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:38.341768+0000 osd.2 (osd.2) 185 : cluster [DBG] 11.9 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 185)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:38.327684+0000 osd.2 (osd.2) 184 : cluster [DBG] 11.9 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:38.341768+0000 osd.2 (osd.2) 185 : cluster [DBG] 11.9 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1327104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920506 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:09.537759+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:39.286566+0000 osd.2 (osd.2) 186 : cluster [DBG] 8.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:39.300714+0000 osd.2 (osd.2) 187 : cluster [DBG] 8.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 187)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:39.286566+0000 osd.2 (osd.2) 186 : cluster [DBG] 8.2 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:39.300714+0000 osd.2 (osd.2) 187 : cluster [DBG] 8.2 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1327104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:10.537904+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1327104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:11.538023+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:41.329570+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:41.364872+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 189)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:41.329570+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.8 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:41.364872+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.8 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1318912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:12.538182+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1318912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:13.538308+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:43.275185+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:43.306965+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 191)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:43.275185+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.18 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:43.306965+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.18 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1310720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927741 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:14.538480+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1310720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:15.538576+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1302528 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:16.538711+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1302528 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:17.538819+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1294336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:18.538914+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1294336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927741 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.829256058s of 11.836503983s, submitted: 10
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:19.539011+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:49.204656+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:49.236623+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1294336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 193)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:49.204656+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.13 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:49.236623+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.13 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:20.539146+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:50.193039+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:50.231853+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1286144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 195)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:50.193039+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.e scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:50.231853+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.e scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:21.539305+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1286144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:22.539460+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:52.197799+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.19 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:52.240163+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.19 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1277952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 197)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:52.197799+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.19 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:52.240163+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.19 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:23.539653+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:53.232032+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.6 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:18:53.263830+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.6 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 1261568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937389 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 199)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:53.232032+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.6 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:18:53.263830+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.6 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:24.539865+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 1253376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:25.539986+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 1253376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:26.540101+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 1253376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:27.540231+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1245184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:28.540337+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1245184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937389 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:29.540468+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1245184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:30.540625+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 1236992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:31.540747+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 1236992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:32.540877+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1220608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.025672913s of 14.031677246s, submitted: 8
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:33.540969+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:19:03.236287+0000 osd.2 (osd.2) 200 : cluster [DBG] 9.7 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:19:03.271628+0000 osd.2 (osd.2) 201 : cluster [DBG] 9.7 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1220608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939800 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 201)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:19:03.236287+0000 osd.2 (osd.2) 200 : cluster [DBG] 9.7 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:19:03.271628+0000 osd.2 (osd.2) 201 : cluster [DBG] 9.7 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:34.541112+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:19:04.256976+0000 osd.2 (osd.2) 202 : cluster [DBG] 9.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:19:04.285241+0000 osd.2 (osd.2) 203 : cluster [DBG] 9.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1220608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 203)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:19:04.256976+0000 osd.2 (osd.2) 202 : cluster [DBG] 9.c scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:19:04.285241+0000 osd.2 (osd.2) 203 : cluster [DBG] 9.c scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:35.541247+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 1212416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:36.541373+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 1212416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:37.541516+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1204224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:38.541621+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 1187840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942211 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:39.541741+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 1187840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:40.541834+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1171456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:41.541951+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:19:11.219304+0000 osd.2 (osd.2) 204 : cluster [DBG] 9.f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:19:11.258161+0000 osd.2 (osd.2) 205 : cluster [DBG] 9.f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1163264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 205)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:19:11.219304+0000 osd.2 (osd.2) 204 : cluster [DBG] 9.f scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:19:11.258161+0000 osd.2 (osd.2) 205 : cluster [DBG] 9.f scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:42.542149+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:19:12.258155+0000 osd.2 (osd.2) 206 : cluster [DBG] 9.17 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  will send 2025-12-13T07:19:12.282781+0000 osd.2 (osd.2) 207 : cluster [DBG] 9.17 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1163264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client handle_log_ack log(last 207)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:19:12.258155+0000 osd.2 (osd.2) 206 : cluster [DBG] 9.17 scrub starts
Dec 13 07:36:25 compute-0 ceph-osd[87155]: log_client  logged 2025-12-13T07:19:12.282781+0000 osd.2 (osd.2) 207 : cluster [DBG] 9.17 scrub ok
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:43.542310+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1146880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:44.542412+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 1138688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:45.542540+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 1138688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:46.542634+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1130496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:47.542735+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1130496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:48.542889+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1122304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:49.542999+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1122304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:50.543134+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1122304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:51.543245+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1114112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:52.543357+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1114112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:53.543497+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1097728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:54.543605+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1097728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:55.543737+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1097728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:56.543875+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1089536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:57.544041+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1089536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:58.544182+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1089536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:59.544320+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1081344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:00.544475+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1081344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:01.544599+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1081344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:02.544730+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1073152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:03.544880+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1073152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:04.545018+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1064960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:05.545137+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1064960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:06.545275+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1056768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:07.545410+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1056768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:08.545542+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1048576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:09.545679+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1048576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:10.545784+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1048576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:11.545889+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:12.546013+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:13.546138+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:14.546255+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:15.546358+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:16.546483+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:17.546600+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:18.546732+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:19.546837+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:20.546981+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:21.547094+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:22.547227+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:23.547330+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:24.547451+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:25.547581+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:26.547723+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:27.547835+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:28.547953+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:29.548100+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:30.548209+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:31.548351+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:32.548481+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:33.548579+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:34.548726+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:35.548853+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:36.548956+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:37.549107+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:38.549223+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:39.549397+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:40.549530+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:41.549675+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:42.549857+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 933888 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:43.550007+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 933888 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:44.550149+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:45.550311+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:46.550511+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:47.550636+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:48.550768+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:49.550925+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:50.551038+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:51.551147+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:52.551337+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:53.551472+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 876544 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:54.551568+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:55.551671+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:56.551803+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:57.551920+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 860160 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:58.552080+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 860160 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:59.552201+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:00.552346+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:01.552512+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:02.552672+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:03.552821+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:04.552941+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:05.553054+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:06.553161+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:07.553306+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:08.553428+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 819200 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:09.553612+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 819200 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:10.553723+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 811008 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:11.553838+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 811008 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:12.553983+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:13.554122+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:14.554402+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:15.554519+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:16.554636+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:17.554782+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:18.554924+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 770048 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:19.555064+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 770048 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:20.555204+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 761856 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:21.555331+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 761856 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:22.555467+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 761856 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:23.555572+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 753664 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:24.555677+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 753664 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:25.555824+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:26.555931+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:27.556076+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:28.556220+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:29.556370+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:30.556539+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:31.556681+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 729088 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:32.556850+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:33.557000+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:34.557147+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:35.557263+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:36.557375+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:37.557479+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:38.557636+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:39.557777+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:40.557946+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:41.558057+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:42.558210+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:43.558328+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:44.558445+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:45.558563+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:46.558714+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 655360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:47.558856+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:48.558956+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:49.559098+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:50.559235+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:51.559359+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:52.559515+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:53.559664+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:54.559812+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:55.559921+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:56.560020+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:57.560134+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:58.560236+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:59.560338+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:00.560448+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:01.560551+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:02.560667+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:03.560783+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:04.560888+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:05.560996+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:06.561114+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:07.561264+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:08.561371+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:09.561495+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:10.561591+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:11.561697+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:12.561815+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:13.561923+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:14.562018+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:15.562110+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:16.562245+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:17.562343+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:18.562461+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 524288 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:19.562552+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 524288 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:20.575392+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 516096 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:21.575591+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 516096 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:22.575760+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 507904 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:23.575876+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 507904 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:24.575993+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 507904 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:25.576128+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 499712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:26.576260+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 499712 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:27.576391+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:28.576518+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:29.576643+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:30.576743+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:31.576834+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:32.577004+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:33.577089+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:34.577225+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:35.577361+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:36.577501+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:37.577617+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 466944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:38.577762+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 458752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:39.577891+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:40.578009+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:41.578135+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:42.578283+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 442368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:43.578395+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:44.578534+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:45.578657+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:46.578777+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:47.578902+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:48.579008+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:49.579153+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:50.579292+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:51.579403+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:52.579521+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:53.579661+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:54.579772+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:55.579889+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:56.580000+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:57.580133+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:58.580266+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:59.580393+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:00.580507+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:01.580647+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:02.580807+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:03.580942+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:04.581056+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:05.581171+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:06.581286+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:07.581403+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:08.581526+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:09.581636+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:10.581790+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:11.581925+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:12.582109+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:13.582235+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:14.582356+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:15.582504+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:16.582618+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:17.582727+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:18.584154+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:19.584566+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:20.585558+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:21.585694+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:22.585914+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:23.586063+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:24.586226+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 278528 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:25.586370+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:26.586515+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:27.586674+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:28.586811+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:29.586948+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:30.587104+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:31.587259+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:32.587488+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 237568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:33.587645+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:34.587765+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:35.587961+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:36.588098+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:37.588226+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 212992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:38.588335+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 212992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:39.588451+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:40.588601+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:41.588735+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:42.588872+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:43.588973+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:44.589076+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:45.589189+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:46.589321+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:47.589467+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:48.589581+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:49.589688+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:50.589787+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:51.589898+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:52.590093+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:53.590201+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:54.590305+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:55.590416+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:56.590485+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:57.590586+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:58.590690+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:59.590799+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:00.590910+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:01.590988+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:02.591114+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:03.591205+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:04.591311+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:05.591810+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:06.591919+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:07.592029+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:08.592147+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 98304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:09.592251+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 81920 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:10.592353+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:11.592664+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:12.592818+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:13.592971+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:14.593113+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:15.593257+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:16.593361+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:17.593477+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:18.593596+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:19.593686+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:20.593774+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:21.593866+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:22.593975+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:23.594070+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:24.594178+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:25.594286+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:26.594387+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:27.594490+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:28.594571+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:29.594669+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:30.594769+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:31.594876+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:32.594991+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:33.595139+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:34.595260+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:35.595373+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:36.595479+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:37.595574+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:38.595707+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:39.595823+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:40.595927+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:41.596108+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:42.596285+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:43.596389+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:44.596515+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:45.596614+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:46.596741+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:47.596873+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5589 writes, 24K keys, 5589 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5589 writes, 841 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5586 writes, 24K keys, 5586 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s
                                           Interval WAL: 5587 writes, 841 syncs, 6.64 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:48.596982+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 925696 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:49.597114+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 925696 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:50.597207+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:51.597302+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:52.597474+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:53.597594+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:54.597693+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:55.597796+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:56.597892+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:57.597994+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:58.598085+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:59.598177+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:00.598277+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:01.598415+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 876544 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:02.598577+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 876544 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:03.598690+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:04.598800+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:05.598941+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:06.599055+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 851968 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:07.599158+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 851968 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:08.599327+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:09.599505+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:10.599605+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:11.599704+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:12.599810+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:13.599948+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:14.600053+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:15.600193+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:16.600328+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:17.600496+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:18.601367+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:19.601488+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:20.601622+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:21.601748+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:22.601891+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:23.602022+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:24.602177+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:25.602269+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:26.602371+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:27.602549+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:28.602674+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:29.602789+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:30.602949+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:31.603095+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:32.603238+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:33.603396+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:34.603523+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:35.603639+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 362.667175293s of 362.673522949s, submitted: 8
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:36.603773+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1540096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:37.603925+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1540096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:38.604049+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:39.604157+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:40.604266+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:41.604366+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:42.604504+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:43.604607+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1515520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:44.604698+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1515520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:45.604824+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1515520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:46.604940+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 1507328 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:47.605052+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 1507328 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:48.605155+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1490944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:49.605267+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1490944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:50.605366+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1490944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:51.605475+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 1482752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:52.605599+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 1482752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:53.605694+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1458176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:54.605793+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 1449984 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:55.605903+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 1449984 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:56.606007+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:57.606109+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:58.606231+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:59.606329+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:00.606432+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:01.606568+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:02.606730+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:03.606862+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1400832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:04.606989+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1400832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:05.607120+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1400832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:06.607231+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 1392640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:07.607329+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 1392640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:08.607478+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1376256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:09.607622+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1376256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:10.607715+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1376256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:11.607840+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 1368064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:12.608008+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 1359872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:13.609171+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 1343488 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:14.609278+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 1343488 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:15.609388+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1335296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:16.609491+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1335296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:17.610251+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1335296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:18.610358+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1327104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:19.610479+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:20.610578+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:21.610676+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 1310720 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:22.610828+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 1310720 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:23.610934+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 1302528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:24.611040+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 1302528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:25.611178+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:26.611365+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:27.611530+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:28.612033+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:29.612155+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 1286144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:30.612267+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 1286144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:31.612363+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 1286144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:32.612496+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 1277952 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:33.612612+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 1253376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:34.612729+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1245184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:35.612891+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1245184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:36.612997+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1236992 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:37.613112+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1236992 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:38.613218+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 1228800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:39.613365+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:40.613486+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:41.613605+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:42.613726+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:43.613822+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:44.613913+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:45.614015+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:46.614113+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:47.614204+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:48.614306+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:49.614430+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:50.614642+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:51.614792+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:52.614947+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:53.615091+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:54.615208+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:55.615337+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:56.615500+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:57.615618+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:58.615732+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:59.615841+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:00.615978+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:01.616143+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:02.616306+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:03.616526+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:04.616711+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:05.616858+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:06.617012+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:07.617154+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:08.617297+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:09.617455+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:10.617611+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:11.617764+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:12.617948+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:13.618100+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:14.618252+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:15.618402+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:16.618553+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:17.618703+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:18.618841+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:19.618957+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:20.619053+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:21.619160+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:22.619278+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:23.619382+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:24.619484+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:25.619574+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:26.619699+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:27.619853+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:28.619979+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:29.620090+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:30.620227+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:31.620359+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:32.620539+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:33.620687+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:34.620878+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:35.621142+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:36.621309+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:37.621552+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:38.622105+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:39.622297+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:40.622477+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:41.622665+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:42.622916+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:43.623067+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:44.623201+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:45.623352+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:46.623503+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:47.623656+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:48.623806+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:49.623958+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:50.624110+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:51.624253+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:52.624376+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:53.624487+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:54.624609+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:55.624713+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:56.624872+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:57.625041+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:58.625216+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:59.625354+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:00.625515+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:01.625718+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:02.626090+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:03.626248+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:04.626388+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:05.626533+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:06.626718+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:07.626859+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:08.627014+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:09.627183+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:10.627325+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:11.627510+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:12.627618+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:13.627737+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:14.627871+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:15.628009+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:16.628150+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:17.628317+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:18.628449+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:19.628549+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:20.628685+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:21.628787+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:22.628906+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:23.629003+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:24.629106+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:25.629195+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:26.629307+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:27.629479+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:28.629578+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:29.629706+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:30.629813+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:31.629919+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:32.630026+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:33.630128+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:34.630238+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:35.630333+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:36.630425+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:37.630533+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:38.630646+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:39.630832+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:40.630919+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:41.631009+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:42.631114+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:43.631203+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:44.631288+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:45.631380+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:46.631484+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:47.631584+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:48.631683+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:49.631773+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:50.631858+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:51.631953+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:52.632054+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:53.632145+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:54.632241+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:55.632337+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:56.632464+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:57.632561+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:58.632655+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:59.632771+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:00.632866+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:01.632964+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:02.633076+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:03.633176+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:04.633270+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:05.633360+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:06.633471+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:07.633576+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:08.633672+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:09.633793+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:10.633889+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:11.634346+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:12.635037+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:13.635531+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:14.635669+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:15.635762+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:16.635860+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:17.635959+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:18.636067+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:19.636185+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:20.636291+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:21.636392+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:22.636483+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:23.636604+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:24.636709+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:25.636822+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:26.636922+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:27.637022+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:28.637129+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:29.637231+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:30.637333+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:31.637445+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:32.637551+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:33.637645+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:34.637735+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:35.637847+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:36.637954+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:37.638069+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:38.638174+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:39.638280+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:40.638376+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:41.638501+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:42.638642+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:43.638748+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:44.638856+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:45.638980+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:46.639079+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:47.641960+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:48.642098+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:49.642218+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:50.642358+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:51.642548+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:52.642674+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 ms_handle_reset con 0x5558b5043000 session 0x5558b746ac40
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: handle_auth_request added challenge on 0x5558b7999400
Dec 13 07:36:25 compute-0 ceph-osd[87155]: mgrc ms_handle_reset ms_handle_reset con 0x5558b3e8fc00
Dec 13 07:36:25 compute-0 ceph-osd[87155]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4292604849
Dec 13 07:36:25 compute-0 ceph-osd[87155]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4292604849,v1:192.168.122.100:6801/4292604849]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: get_auth_request con 0x5558b551c000 auth_method 0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: mgrc handle_mgr_configure stats_period=5
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:53.642838+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:54.642986+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:55.643146+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:56.643348+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:57.643593+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:58.643730+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:59.643891+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:00.644001+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:01.644147+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:02.644285+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:03.644424+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:04.644589+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:05.644718+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:06.644812+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:07.644925+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:08.645058+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:09.645185+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:10.645314+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:11.645459+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:12.645606+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:13.645737+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:14.645868+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:15.645964+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:16.646107+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:17.646235+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:18.646365+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:19.646498+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:20.646630+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:21.646755+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:22.646903+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:23.647038+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:24.647177+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:25.647275+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:26.647410+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:27.647535+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:28.647665+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 ms_handle_reset con 0x5558b718b400 session 0x5558b580aa80
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: handle_auth_request added challenge on 0x5558b8265800
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:29.647785+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:30.647903+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:31.648029+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:32.648166+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:33.648294+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:34.648428+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:35.648546+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947035 data_alloc: 218103808 data_used: 3644
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.072052002s of 300.109008789s, submitted: 90
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: handle_auth_request added challenge on 0x5558b825c400
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:36.648659+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:37.649235+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:38.649363+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:39.649474+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:40.649606+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:41.649720+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:42.649875+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:43.650007+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:44.650169+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:45.650264+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:46.650366+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:47.650504+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:48.650642+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:49.650743+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:50.650916+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:51.651055+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:52.651214+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:53.651326+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:54.651480+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:55.651617+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:56.652529+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:57.652660+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:58.652784+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:59.652940+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:00.653098+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:01.653267+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:02.653414+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:03.653556+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:04.653691+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:05.653853+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:06.653992+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:07.654131+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:08.654483+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:09.654610+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:10.654717+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:11.654818+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:12.654957+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:13.655070+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:14.655172+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:15.655299+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:16.655425+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:17.655550+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:18.655673+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:19.655811+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:20.655930+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:21.656053+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:22.656193+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:23.656286+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:24.656382+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:25.656536+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:26.656649+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:27.656746+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:28.656876+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:29.657013+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:30.657142+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:31.657278+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:32.657432+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:33.657632+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:34.657764+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:35.657913+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:36.658051+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:37.658167+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:38.658290+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:39.658388+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:40.658517+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:41.658627+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:42.658741+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:43.658848+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:44.658958+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:45.659064+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:46.659236+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:47.659351+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:48.659468+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:49.659577+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:50.659687+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:51.659821+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:52.659954+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:53.660091+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:54.660216+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:55.660316+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:56.660461+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:57.660592+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:58.660684+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:59.660800+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:00.660952+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:01.661057+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:02.661197+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:03.661339+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:04.661482+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:05.661632+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:06.661764+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:07.661882+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:08.661989+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:09.662121+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:10.662229+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:11.662361+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:12.662541+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:13.662710+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:14.662843+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:15.662972+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:16.663092+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:17.663185+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 16384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:18.663281+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-mon[74928]: from='client.14432 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:25 compute-0 ceph-mon[74928]: pgmap v789: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:25 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3167973010' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 07:36:25 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1272743291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 07:36:25 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3242158922' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 07:36:25 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/705410132' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:19.663371+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:20.663498+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:21.663685+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:22.663842+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:23.664000+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:24.664121+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:25.664226+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:26.664375+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:27.664536+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:28.664705+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:29.664805+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:30.664928+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:31.665059+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:32.665203+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:33.665329+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:34.665473+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:35.665567+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:36.665692+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:37.665787+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:38.665908+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:39.666009+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:40.666108+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:41.666203+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:42.666326+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:43.666454+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:44.666595+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:45.666716+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:46.666823+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:47.666924+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:48.667029+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:49.667131+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:50.667280+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:51.667387+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:52.667529+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:53.667655+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:54.667804+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:55.667933+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:56.668062+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:57.668223+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:58.668370+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:59.668503+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:00.668594+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:01.668726+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:02.668899+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:03.669065+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:04.669195+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:05.669307+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:06.669508+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:07.669673+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:08.669778+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:09.669906+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:10.670093+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:11.670179+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:12.670381+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:13.670551+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:14.670672+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:15.670831+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:16.670981+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:17.671091+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:18.671174+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:19.671350+0000)
Dec 13 07:36:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 07:36:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486468914' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:20.671474+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:21.671594+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:22.671752+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:23.671866+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:24.671996+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:25.672167+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:26.672297+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:27.672387+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:28.672559+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:29.672707+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:30.672874+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:31.672975+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:32.673100+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:33.673230+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:34.673378+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:35.673529+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:36.673632+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:37.673779+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:38.673936+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:39.674058+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:40.674174+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:41.674305+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:42.674447+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:43.674571+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:44.674752+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:45.674945+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:46.675152+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:47.675332+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:48.675502+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:49.675607+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:50.675707+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:51.675806+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:52.675935+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:53.676029+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:54.676133+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:55.676244+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:56.676373+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:57.676534+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:58.676689+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:59.676818+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:00.676969+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:01.677087+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:02.678484+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:03.678615+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:04.678723+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:05.678827+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:06.678942+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:07.679053+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:08.679171+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:09.679322+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:10.679479+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:11.679577+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:12.679728+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:13.679815+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:14.679937+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:15.680071+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:16.680174+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:17.680295+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:18.680421+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:19.680542+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:20.680679+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:21.680769+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:22.680912+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:23.681055+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:24.681204+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:25.681351+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:26.681481+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:27.681614+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:28.681748+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:29.681844+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:30.681969+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:31.682109+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:32.682270+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:33.682372+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:34.682516+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:35.682630+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:36.682763+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:37.682894+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:38.683056+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:39.683155+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:40.683288+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:41.683422+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:42.683595+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:43.683728+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:44.683831+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:45.683940+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:46.684050+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:47.684164+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5817 writes, 24K keys, 5817 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5817 writes, 955 syncs, 6.09 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5558b3e3b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:48.684294+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:49.684399+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:50.684516+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:51.684604+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:52.684722+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:53.684820+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:54.684920+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:55.685022+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:56.685123+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:57.685221+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:58.685321+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:59.685411+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:00.685568+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:01.685719+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:02.685882+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:03.686015+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:04.686128+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:05.686280+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:06.686479+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:07.686600+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:08.686748+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:09.686901+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:10.687021+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:11.687171+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:12.687309+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:13.687414+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:14.687511+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:15.687947+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:16.688100+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:17.688230+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:18.688381+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:19.688509+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:20.688627+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:21.688756+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:22.688900+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:23.688990+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:24.689088+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:25.689195+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:26.689294+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:27.689401+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:28.689532+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:29.689705+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:30.689807+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:31.689966+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:32.690115+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:33.690248+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:34.690400+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:35.690519+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.917877197s of 299.925598145s, submitted: 24
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 589824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:36.690665+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:37.690810+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:38.690929+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:39.691054+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:40.691197+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:41.691323+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:42.691468+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:43.691557+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:44.691683+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:45.691808+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:46.692533+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:47.692629+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:48.692874+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:49.693040+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:50.693229+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:51.693369+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:52.693520+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:53.693638+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:54.693746+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:55.693844+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:56.693939+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:57.694027+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:58.694118+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:59.694210+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:00.694313+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:01.694414+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:02.694550+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:03.694663+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:04.694796+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:05.694954+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:06.695070+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:07.695167+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:08.695286+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:09.695394+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:10.695545+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:11.695662+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:12.695795+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:13.695914+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:14.696043+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:15.696138+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:16.696253+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:17.696342+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:18.696462+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:19.696644+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:20.696736+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:21.696864+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:22.696980+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:23.697111+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:24.697223+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:25.697312+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:26.697428+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:27.697567+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:28.697689+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:29.697816+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:30.697929+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:31.698055+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:32.698182+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:33.698325+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:34.698472+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:35.698608+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:36.698770+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:37.698867+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:38.698994+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:39.699093+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:40.699184+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:41.699280+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:42.699391+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:43.699507+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:44.699629+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:45.699751+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:46.699898+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:47.700021+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:48.700127+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:49.700232+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:50.700338+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:25 compute-0 ceph-osd[87155]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948571 data_alloc: 218103808 data_used: 5482
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:51.700457+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 327680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:52.700574+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'config diff' '{prefix=config diff}'
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'config show' '{prefix=config show}'
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 1802240 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:53.700678+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 2015232 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: tick
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_tickets
Dec 13 07:36:25 compute-0 ceph-osd[87155]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:54.700780+0000)
Dec 13 07:36:25 compute-0 ceph-osd[87155]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec6000/0x0/0x4ffc00000, data 0xac77e/0x166000, compress 0x0/0x0/0x0, omap 0x112df, meta 0x2bbed21), peers [0,1] op hist [])
Dec 13 07:36:25 compute-0 ceph-osd[87155]: do_command 'log dump' '{prefix=log dump}'
Dec 13 07:36:25 compute-0 nova_compute[241222]: 2025-12-13 07:36:25.881 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:25 compute-0 nova_compute[241222]: 2025-12-13 07:36:25.881 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:25 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:25 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} v 0)
Dec 13 07:36:25 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 07:36:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065973922' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 07:36:26 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} v 0)
Dec 13 07:36:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:36:26 compute-0 nova_compute[241222]: 2025-12-13 07:36:26.563 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:26 compute-0 nova_compute[241222]: 2025-12-13 07:36:26.567 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:26 compute-0 nova_compute[241222]: 2025-12-13 07:36:26.567 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 07:36:26 compute-0 nova_compute[241222]: 2025-12-13 07:36:26.567 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 07:36:26 compute-0 nova_compute[241222]: 2025-12-13 07:36:26.581 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 07:36:26 compute-0 nova_compute[241222]: 2025-12-13 07:36:26.581 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:26 compute-0 nova_compute[241222]: 2025-12-13 07:36:26.581 241226 DEBUG nova.compute.manager [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 07:36:26 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:26 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 07:36:26 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172151140' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 07:36:26 compute-0 podman[247867]: 2025-12-13 07:36:26.760459982 +0000 UTC m=+0.100722168 container health_status 1929b765dac802841eb5d5f56597ea7bfd15768bcf514c3ef50eb60bf1b13d07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 07:36:26 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14456 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: from='client.14440 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: from='client.14444 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1486468914' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3065973922' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:36:26 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4172151140' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 07:36:27 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857435989' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 07:36:27 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 07:36:27 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430319980' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 07:36:27 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14464 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:27 compute-0 nova_compute[241222]: 2025-12-13 07:36:27.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:27 compute-0 nova_compute[241222]: 2025-12-13 07:36:27.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:27 compute-0 nova_compute[241222]: 2025-12-13 07:36:27.568 241226 DEBUG oslo_service.periodic_task [None req-d86864b1-4a5a-40d1-837d-f2d512596900 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 07:36:27 compute-0 ceph-mon[74928]: from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: from='client.14452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: pgmap v790: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:27 compute-0 ceph-mon[74928]: from='client.14456 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1857435989' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1430319980' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: from='client.14464 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:36:27 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14468 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:27 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 13 07:36:27 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/390024380' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 07:36:28 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14470 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:28 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14474 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:28 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:28 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 13 07:36:28 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464768103' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 07:36:28 compute-0 ceph-mon[74928]: from='client.14468 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/390024380' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 07:36:28 compute-0 ceph-mon[74928]: from='client.14470 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:28 compute-0 ceph-mon[74928]: from='client.14474 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:28 compute-0 ceph-mon[74928]: pgmap v791: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:28 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/464768103' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 07:36:28 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:28 compute-0 ceph-mgr[75200]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 07:36:28 compute-0 ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mgr-compute-0-qsherl[75196]: 2025-12-13T07:36:28.967+0000 7facc0ef1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 07:36:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 13 07:36:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235680705' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=41/41 les/c/f=42/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001789 3 0.000257
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 50 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/41 les/c/f=50/42/0 sis=49) [1] r=0 lpr=49 pi=[41,49)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:21.682066+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:15:51.221009+0000 osd.1 (osd.1) 42 : cluster [DBG] 7.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:15:51.231560+0000 osd.1 (osd.1) 43 : cluster [DBG] 7.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 4014080 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 43)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:15:51.221009+0000 osd.1 (osd.1) 42 : cluster [DBG] 7.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:15:51.231560+0000 osd.1 (osd.1) 43 : cluster [DBG] 7.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:22.682244+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 4014080 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fe085000/0x0/0x4ffc00000, data 0xaf168/0x141000, compress 0x0/0x0/0x0, omap 0x6e32, meta 0x1a291ce), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:23.682390+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:15:53.222785+0000 osd.1 (osd.1) 44 : cluster [DBG] 3.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:15:53.233343+0000 osd.1 (osd.1) 45 : cluster [DBG] 3.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 4014080 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fe08b000/0x0/0x4ffc00000, data 0xaf168/0x141000, compress 0x0/0x0/0x0, omap 0x6e32, meta 0x1a291ce), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 45)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:15:53.222785+0000 osd.1 (osd.1) 44 : cluster [DBG] 3.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:15:53.233343+0000 osd.1 (osd.1) 45 : cluster [DBG] 3.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:24.682576+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 3981312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:25.682712+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 3973120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907256 data_alloc: 218103808 data_used: 10897
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000523 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000396 1 0.000040
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.154844 14 0.000167
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.160058 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.160094 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.160113 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844452858s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718498230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] exit Reset 0.000102 1 0.000128
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.844369888s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718498230s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000402 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000022
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000194 1 0.000320
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.145495 7 0.000050
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147650 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.147709 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.147730 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854372978s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.729553223s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] exit Reset 0.000047 1 0.000076
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854341507s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.729553223s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.156127 14 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.161152 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.161421 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.161437 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843180656s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718505859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] exit Reset 0.000025 1 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.843167305s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718505859s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.127038 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.129538 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.129580 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.129596 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872672081s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748138428s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.145241 7 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147866 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.147900 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.147915 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854546547s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730064392s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] exit Reset 0.000257 1 0.000275
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.872431755s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748138428s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000035 1 0.000040
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] exit Reset 0.000231 1 0.000262
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.854331970s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730064392s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000138 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000161
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 51 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.158256 14 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.163532 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.163568 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.163583 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841082573s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] exit Reset 0.000023 1 0.000043
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.841072083s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.147024 7 0.000028
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.149672 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.149704 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.149718 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852649689s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730140686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] exit Reset 0.000021 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852640152s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730140686s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.129103 1 0.000016
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.131356 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.131396 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.131411 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870583534s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748161316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] exit Reset 0.000019 1 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870573044s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748161316s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.129356 1 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.131546 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.131656 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.131672 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870308876s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748184204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] exit Reset 0.000035 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.870298386s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748184204s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.158862 14 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.164169 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.164250 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.164267 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840455055s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718482971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] exit Reset 0.000023 1 0.000069
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.840443611s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718482971s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.147603 7 0.000022
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.150189 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.150219 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.150233 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852058411s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730171204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] exit Reset 0.000020 1 0.000045
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.852048874s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730171204s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.130041 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.132219 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.132258 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.132286 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869616508s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748191833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] exit Reset 0.000025 1 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869601250s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.159690 14 0.000097
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.164972 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.165011 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.165025 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839732170s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718414307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839722633s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718414307s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.148224 7 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.150708 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.150744 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.150785 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851435661s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730194092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] exit Reset 0.000023 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.851426125s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730194092s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.130215 1 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.132428 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.132466 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.132488 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869359016s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748191833s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] exit Reset 0.000018 1 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869349480s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748191833s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.160334 14 0.000546
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.165404 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.165435 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.165449 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839279175s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718368530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] exit Reset 0.000021 1 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839269638s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.130631 1 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.132753 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.132786 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.132802 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.869021416s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748207092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] exit Reset 0.000034 1 0.000060
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868998528s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748207092s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.160821 14 0.000751
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.165677 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.165709 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.165726 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839054108s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718353271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] exit Reset 0.000036 1 0.000044
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.839038849s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718353271s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.148840 7 0.000020
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.151261 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.151305 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.151320 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850866318s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730262756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] exit Reset 0.000019 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850856781s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730262756s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.130839 1 0.000141
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.132983 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133015 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133028 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868752480s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748222351s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] exit Reset 0.000017 1 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868742943s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748222351s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.148968 7 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.151414 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.151451 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.151466 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850701332s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730285645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] exit Reset 0.000019 1 0.000048
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850690842s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131100 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133129 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133163 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133177 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868589401s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748245239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868580818s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748245239s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.149167 7 0.000020
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.151593 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.151627 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.151642 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850529671s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730285645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] exit Reset 0.000019 1 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.850520134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730285645s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.161431 14 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.166370 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.166408 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.166421 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838470459s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718345642s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] exit Reset 0.000019 1 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838461876s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718345642s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131392 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133017 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133114 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133362 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868296623s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748268127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] exit Reset 0.000036 1 0.000052
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868269920s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748268127s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.160914 14 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.166137 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.166622 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.166637 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838410378s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718490601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] exit Reset 0.000019 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838400841s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718490601s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.161123 14 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.166479 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.166778 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.166792 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838177681s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718376160s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] exit Reset 0.000048 1 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.838152885s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718376160s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131732 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133343 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133379 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133394 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867964745s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748283386s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867954254s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748283386s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.149776 7 0.000195
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.152013 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.152046 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.152060 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849902153s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730323792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] exit Reset 0.000018 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849894524s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730323792s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131890 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133445 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133478 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133492 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867810249s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748298645s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] exit Reset 0.000017 1 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867800713s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748298645s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.162178 14 0.000152
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.167229 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.167301 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.167314 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837768555s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718368530s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] exit Reset 0.000017 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.837759018s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718368530s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.150006 7 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.152168 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.152206 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.152219 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849675179s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730346680s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] exit Reset 0.000024 1 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849659920s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730346680s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.131858 1 0.000028
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133555 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133619 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133636 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.868008614s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] exit Reset 0.000018 1 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867999077s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.162521 14 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.167680 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.167718 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.167732 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132568 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.135033 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.135076 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.135091 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867127419s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748153687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] exit Reset 0.000036 1 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867115974s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748153687s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active+clean] exit Started/Primary/Active/Clean 7.150427 7 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active mbc={}] exit Started/Primary/Active 7.152517 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active mbc={}] exit Started/Primary 7.152551 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active mbc={}] exit Started 7.152564 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 43'551 mlcod 43'551 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849251747s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 active pruub 109.730361938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] exit Reset 0.000025 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.849235535s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY pruub 109.730361938s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132270 1 0.000127
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133712 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133877 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133891 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867587090s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] exit Reset 0.000040 1 0.000055
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867555618s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748779297s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.162931 14 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.168137 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.168185 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.168209 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836945534s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718269348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] exit Reset 0.000019 1 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836936951s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718269348s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132446 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133770 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133812 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133827 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867397308s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748802185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] exit Reset 0.000032 1 0.000044
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867383003s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748802185s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132196 1 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133306 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133768 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133781 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867756844s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749267578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] exit Reset 0.000018 1 0.000031
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867748260s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749267578s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.150859 7 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.152855 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.152892 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.152905 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848821640s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730400085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] exit Reset 0.000016 1 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] exit Start 0.000011 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848813057s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730400085s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.163255 14 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.168361 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.168423 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.168438 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132769 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.133900 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.133939 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.133955 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867094994s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748840332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] exit Reset 0.000022 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867082596s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748840332s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.163662 14 0.000049
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.168911 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.168944 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.168957 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836301804s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718193054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.151237 7 0.000020
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.153162 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.153193 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.153205 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] exit Reset 0.000036 1 0.000043
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.836292267s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718193054s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132824 1 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134194 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134250 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134265 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867068291s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749168396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] exit Reset 0.000021 1 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.867057800s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749168396s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.164020 14 0.000252
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.169416 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.169450 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.169468 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835968971s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718185425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] exit Reset 0.000019 1 0.000051
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835960388s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718185425s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.133012 1 0.000035
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134360 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134397 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134413 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866882324s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749176025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] exit Reset 0.000018 1 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866874695s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749176025s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.151621 7 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.153481 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.153516 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.153528 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848036766s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730422974s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] exit Reset 0.000023 1 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848023415s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730422974s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.164302 14 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.169803 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.169843 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.169858 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835701942s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718177795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] exit Reset 0.000018 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835692406s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718177795s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.133252 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134472 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134514 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134622 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866633415s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749198914s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] exit Reset 0.000023 1 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866624832s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749198914s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.133396 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134608 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134653 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134666 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866514206s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749183655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] exit Reset 0.000023 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866498947s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749183655s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.167440 14 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.170235 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.170285 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.170301 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832565308s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.715339661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] exit Reset 0.000019 1 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.832555771s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715339661s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.151096 7 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.153694 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.153733 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.153753 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848815918s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.731674194s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] exit Reset 0.000040 1 0.000044
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.848808289s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.731674194s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.133652 1 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134843 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134878 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134891 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866249084s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.749206543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] exit Reset 0.000035 1 0.000049
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866222382s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.749206543s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.164683 14 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.169988 0 0.000000
Dec 13 07:36:29 compute-0 crontab[248307]: (root) LIST (root)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.170044 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.170061 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835205078s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Reset 0.000018 1 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.835196495s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.132977 1 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.134797 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.134853 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.134869 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866911888s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.750068665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] exit Reset 0.000020 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.866901398s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.750068665s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.152461 7 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.154179 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.154213 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.154226 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847294807s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730545044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.847285271s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730545044s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.168061 14 0.000073
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.170824 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.170888 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834938049s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Reset 0.001858 1 0.001876
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.834763527s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846091270s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 active pruub 109.730407715s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] exit Reset 0.001006 1 0.002389
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.846067429s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 109.730407715s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008280 2 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010610 2 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.009459 2 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 9.170912 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=45) [1] r=0 lpr=45 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830695152s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.715332031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] exit Reset 0.000030 1 0.001343
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.830681801s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.715332031s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833575249s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 active pruub 115.718284607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Reset 0.003823 1 0.003836
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=14.833558083s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 115.718284607s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.136471 1 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.138984 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.139048 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 5.139074 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863280296s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 active pruub 111.748130798s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] exit Reset 0.000029 1 0.000067
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.863265038s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY pruub 111.748130798s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008039 2 0.000020
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007702 2 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007314 2 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007206 2 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006710 2 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006215 2 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006138 2 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.006461 2 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 51 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:26.682819+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 3375104 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 51 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 51 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.904077 3 0.000028
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.904112 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000041 1 0.000063
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.904702 3 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.904732 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.903099 2 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.914142 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.904808 6 0.000421
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000062
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000020 1 0.000043
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905128 6 0.000052
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902542 2 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910668 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.903479 2 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.911842 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905503 6 0.000063
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.905647 3 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.905669 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.903736 3 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.903753 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000049 1 0.000071
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000029 1 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000038 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000148 1 0.000159
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906525 6 0.000056
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906732 3 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.906755 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000933 1 0.000942
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000025 1 0.000048
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 1 0.000024
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906344 6 0.000051
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902964 2 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910755 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902840 2 0.000024
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910215 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902852 2 0.000020
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910109 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907968 6 0.000028
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908185 3 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.908199 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907855 3 0.000028
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.907969 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 1 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY mbc={}] exit Started/Stray 0.907554 3 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY mbc={}] exit Started 0.907573 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000030 1 0.000043
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.898652 2 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904929 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.898603 2 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904797 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908710 6 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000152 1 0.000251
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908953 6 0.000488
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 1 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909133 3 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.909149 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909328 6 0.000056
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909457 3 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.909479 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909598 6 0.000066
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000028 1 0.000044
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909695 3 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.909710 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000015 1 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000038 1 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.903486 2 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910264 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910482 3 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.910502 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000024 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 1 0.000020
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000959 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911869 3 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.911883 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911348 6 0.000057
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911269 3 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.911287 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000019 1 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.904869 2 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.914551 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 1 0.000022
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000030 1 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913458 3 0.000305
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.913478 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.899315 2 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.905835 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000023 1 0.000031
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.914317 3 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 1 0.000022
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.914333 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=-1 lpr=51 pi=[47,51)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000033 1 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 1 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.001015 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003233 2 0.000259
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909417 7 0.000051
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907950 7 0.000063
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908644 7 0.000052
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909642 7 0.000050
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906263 7 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909766 7 0.000148
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.908770 7 0.000024
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.916859 7 0.000234
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909022 7 0.000117
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907858 7 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.907518 7 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913146 7 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912466 7 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911735 7 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003062 2 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906137 7 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910350 7 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910892 7 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002835 2 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002702 2 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003374 2 0.000692
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912413 7 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002173 2 0.000016
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910230 7 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.910789 7 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000358 1 0.000013
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000431 1 0.000013
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000434 1 0.000009
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.906781 7 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.914099 7 0.000049
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909001 7 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912748 7 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000495 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000671 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000726 1 0.000008
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000992 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001250 1 0.000013
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004414 4 0.000078
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.10( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004713 4 0.000488
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004394 4 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003871 4 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.13( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.11( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.1a( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003878 4 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003836 4 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.19( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.6( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003671 4 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003652 4 0.000024
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.2( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003220 4 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.f( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003020 4 0.000040
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002971 4 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001463 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000109 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.000041 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.b( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000078 1 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 lc 43'17 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913643 7 0.000063
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912474 7 0.000050
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.912819 7 0.000056
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.913602 7 0.000060
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.915229 7 0.000055
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917095 7 0.000074
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.917488 7 0.000381
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.911429 7 0.000050
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.909765 7 0.000094
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.004166 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.007426 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.912297 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.011331 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014412 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.919951 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.018737 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.021594 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1c( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.928140 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.026096 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.028820 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.935183 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033449 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.036847 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.11( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.941994 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.040787 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.042982 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.952598 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.048131 1 0.000008
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.048510 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1b( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.957946 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.055443 1 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.055899 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.12( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.963868 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.062814 1 0.000011
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.063266 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.972925 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069987 1 0.000016
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.070682 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.976961 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.077464 1 0.000172
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.078141 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.986803 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.084656 1 0.000111
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.085492 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.18( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.995275 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.091900 1 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.092921 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.001707 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.099009 1 0.000031
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.100279 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.009324 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.116569 2 0.000027
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.12( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.116718 2 0.000031
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 lc 43'57 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.158600 3 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.158619 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.228964 3 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.228978 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:27.682909+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.129001 1 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[10.14( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [1] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247234 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247326 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247375 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247438 1 0.000007
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247427 1 0.000120
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247445 1 0.000009
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247474 1 0.000013
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247412 1 0.000127
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247453 1 0.000252
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247452 1 0.000009
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247483 1 0.000011
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247376 1 0.000011
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247290 1 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247302 1 0.000009
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.247322 1 0.000009
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246440 2 0.000010
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246432 1 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246456 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246491 1 0.000013
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246509 1 0.000011
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246547 1 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246550 1 0.000008
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246578 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246597 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.246720 1 0.000068
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.091966 1 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.021246 1 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066959 1 0.000060
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.314247 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.221782 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.316796 3 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.316833 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000039 1 0.000069
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074203 1 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.321560 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.234724 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.081422 1 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.328822 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.241303 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088773 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.336231 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.247982 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096159 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.343606 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.249869 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103497 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.350959 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.261325 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110996 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.358495 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.269404 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118273 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.365703 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.278249 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125661 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.373381 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.290258 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.133009 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.380479 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.290726 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.140341 1 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.387850 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.298655 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.147781 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.395185 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.301985 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.155100 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.402433 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.316582 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.162511 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.409837 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.318856 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.169867 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.417210 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.329976 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.177203 1 0.000022
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.425186 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.333061 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.184338 1 0.000035
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.430802 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.344482 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fcee6000/0x0/0x4ffc00000, data 0xb1113/0x144000, compress 0x0/0x0/0x0, omap 0x70a5, meta 0x2bc8f5b), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.191667 1 0.000028
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.438148 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.3( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.350663 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.199048 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.445565 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.8( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.358403 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.206393 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.452925 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.15( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.370057 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.213775 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.460345 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.d( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.373978 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 2138112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.221199 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.467773 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.1a( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.379218 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.228579 1 0.000016
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.475176 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.392691 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.235935 1 0.000010
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.482552 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.392335 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.243271 1 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.490018 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.2( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.405266 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.576270 3 0.000020
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.576286 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000030 1 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.332083 2 0.000059
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started/ToDelete 0.424068 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[11.9( v 43'2 (0'0,43'2] lb MIN local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=-1 lpr=51 pi=[49,51)/1 pct=0'0 crt=43'2 lcod 0'0 active mbc={}] exit Started 1.492041 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 DELETING pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.339557 2 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete 0.360821 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started 1.501171 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.288211 2 0.000090
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete 0.288284 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started 1.514102 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.103152 2 0.000060
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete 0.103209 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started 1.587491 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.677610 3 0.002062
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.679698 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000071
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 DELETING pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.014242 2 0.000062
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started/ToDelete 0.014325 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] lb MIN local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=-1 lpr=51 pi=[45,51)/1 pct=0'0 crt=36'6 lcod 0'0 active mbc={}] exit Started 1.602757 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993105 4 0.000040
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993170 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993316 4 0.000031
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993367 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993474 4 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993525 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993496 4 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993548 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993772 4 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993830 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992915 4 0.001060
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994022 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994254 4 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994337 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994703 4 0.000031
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994753 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994514 4 0.000095
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994630 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994776 4 0.000045
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994838 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995206 4 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.995259 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994676 4 0.000778
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.995600 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995542 4 0.000068
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.995637 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996133 4 0.000044
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.996202 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994665 4 0.000985
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.996590 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994995 4 0.000918
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.995060 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:28.683045+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:15:58.205745+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:15:58.216370+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 53 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.333954 5 0.000162
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.333077 5 0.000082
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000062 1 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 13 07:36:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203291855' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.334830 5 0.000263
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000999 1 0.000018
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.333883 5 0.000208
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.334871 5 0.000207
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.335069 5 0.000112
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.334962 5 0.000184
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.334687 5 0.000115
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.334419 5 0.000066
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.334468 5 0.000082
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.334955 5 0.000064
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.334233 5 0.000074
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.335468 5 0.000321
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.334609 5 0.000085
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.334668 5 0.000108
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 48'552 (0'0,48'552] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.335415 5 0.000155
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042456 2 0.000016
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.043605 1 0.000085
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000287 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052536 2 0.000145
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.095593 1 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000444 1 0.000138
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031187 2 0.000075
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.127162 1 0.000011
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000312 1 0.000050
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 2015232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038373 2 0.000060
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.165909 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000330 1 0.000044
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059552 2 0.000031
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.225865 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000296 1 0.000058
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059557 2 0.000084
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.285812 1 0.000019
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000356 1 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.345758 1 0.000243
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059634 2 0.000056
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000319 1 0.000052
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066567 2 0.000089
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.412841 1 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000332 1 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038276 2 0.000063
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.451528 1 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000419 1 0.000067
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.490319 1 0.000009
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038409 2 0.000066
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000383 1 0.000083
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031232 2 0.000055
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.522018 1 0.000012
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000346 1 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.024296 2 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.546726 1 0.000008
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000395 1 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.578437 1 0.000013
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031328 2 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000331 1 0.000030
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066634 2 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.645448 1 0.000013
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000533 1 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 53 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 47)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:15:58.205745+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:15:58.216370+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086425781s of 10.170964241s, submitted: 539
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.092938 1 0.000291
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007001 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.000195 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.000285 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.576459 1 0.000069
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327981949s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118011475s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] exit Reset 0.000109 1 0.000321
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327939034s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118011475s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.386212 1 0.000139
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007216 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.000775 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.000790 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327603340s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117866516s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] exit Reset 0.000047 1 0.000064
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327571869s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117866516s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.326330 1 0.000223
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007270 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001109 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001121 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327441216s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117881775s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] exit Reset 0.000048 1 0.000062
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.327405930s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117881775s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.544511 1 0.000173
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007733 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001137 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007693 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001278 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001171 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326743126s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117576599s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] exit Reset 0.000167 1 0.001053
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326668739s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117576599s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.150692 1 0.000067
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007696 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002042 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002058 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326932907s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117958069s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] exit Reset 0.000032 1 0.000048
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326913834s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.260221 1 0.000133
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007807 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.002568 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.002581 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326637268s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117897034s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] exit Reset 0.000038 1 0.000053
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.326610565s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117897034s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001310 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323873520s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.116912842s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.446802 1 0.000076
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009899 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.003935 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.003952 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324602127s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117736816s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] exit Reset 0.000089 1 0.002402
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324528694s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117736816s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.029552 1 0.000183
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009680 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004948 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004961 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324680328s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118041992s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] exit Reset 0.000046 1 0.000061
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324645996s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118041992s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.506827 1 0.000100
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] exit Reset 0.000529 1 0.003572
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009348 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004455 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004478 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.324029922s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117660522s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] exit Reset 0.000189 1 0.002904
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] exit Start 0.000080 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323390007s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116912842s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] exit Start 0.000093 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323875427s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117660522s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.185243 1 0.000223
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010369 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.005993 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.006013 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323761940s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117958069s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.224417 1 0.000101
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010423 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.006073 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.006091 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323680878s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.117927551s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] exit Reset 0.000066 1 0.000089
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323638916s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117927551s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] exit Reset 0.000239 1 0.000438
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323557854s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.117958069s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.129458 1 0.000094
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010479 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.633818 1 0.000068
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010643 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.007099 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.006878 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.007160 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.006932 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322302818s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.116996765s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323290825s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 active pruub 119.118003845s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] exit Reset 0.000184 1 0.000323
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] exit Reset 0.000176 1 0.000443
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] exit Start 0.000385 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] exit Start 0.000413 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.322153091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.116996765s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.323190689s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118003845s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.074670 5 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.720099 4 0.000148
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000473 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052330 2 0.000074
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:29.683208+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 2187264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.905892 1 0.000064
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.014385 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.009026 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.009050 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320955276s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 active pruub 119.118865967s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] exit Reset 0.000080 1 0.000131
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] exit Start 0.000023 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.320901871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 119.118865967s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.959013 1 0.000077
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started/Primary/Active 2.014522 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started/Primary 3.009519 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] exit Started 3.009552 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=48'552 lcod 53'553 mlcod 53'553 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319732666s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 active pruub 119.118080139s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] exit Reset 0.000132 1 0.000380
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] exit Start 0.000106 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.319633484s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY pruub 119.118080139s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009017 7 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012167 7 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011180 7 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008006 7 0.000052
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012012 7 0.000443
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011592 7 0.000055
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009299 7 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000091 1 0.000050
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.008221 7 0.000044
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007177 7 0.000651
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000132 1 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000460 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007708 7 0.000539
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009316 7 0.000327
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009085 7 0.000333
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012104 7 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013130 7 0.000052
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000775 1 0.000077
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000834 1 0.000012
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000851 1 0.000020
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000945 1 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000920 1 0.000096
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000645 1 0.000405
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000522 1 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000556 1 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000547 1 0.000048
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000580 1 0.000048
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000590 1 0.000048
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097529 2 0.000195
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097685 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.106756 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.223121 2 0.000339
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.223524 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.235716 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:30.683344+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.289672 2 0.000170
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.290171 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.301370 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.326204 2 0.000088
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.327016 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.335110 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.385467 2 0.000116
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.386332 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.398361 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.414992 2 0.000177
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.415902 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.427522 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 2998272 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765829 data_alloc: 218103808 data_used: 6167
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.474172 2 0.000090
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.475152 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.484491 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.496276 2 0.000064
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.497223 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.504954 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.533366 2 0.000101
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.534118 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.542659 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.577932 2 0.000183
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.578522 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.586734 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.629580 2 0.000087
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.630166 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.639706 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.666566 2 0.000080
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.667159 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.676424 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.696100 2 0.000086
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.696724 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.708876 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.725663 2 0.000096
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.726304 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=-1 lpr=54 pi=[47,54)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.739467 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 55 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0xb6bc9/0x130000, compress 0x0/0x0/0x0, omap 0x7ae9, meta 0x2bc8517), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:31.683489+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] exit Started/Stray 1.455963 6 0.000266
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.456680 6 0.000102
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000690 2 0.000082
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=48'552 lcod 53'553 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000698 2 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[47,55)/1 crt=53'554 lcod 53'553 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075239 2 0.000242
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'554 lcod 53'553 unknown NOTIFY mbc={}] exit Started/ToDelete 0.076004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=53'554 lcod 53'553 unknown NOTIFY mbc={}] exit Started 1.532140 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.127108 2 0.000150
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127849 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=-1 lpr=55 pi=[47,55)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.584576 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:32.683635+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:33.683754+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:34.683863+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:35.683978+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 690212 data_alloc: 218103808 data_used: 5223
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 57 heartbeat osd_stat(store_statfs(0x4fcef9000/0x0/0x4ffc00000, data 0xb9e04/0x12f000, compress 0x0/0x0/0x0, omap 0x7fe9, meta 0x2bc8017), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:36.684070+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 3072000 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:37.684169+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 3072000 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:38.684287+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 3063808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:39.684425+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 3055616 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.015730858s of 11.047432899s, submitted: 87
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:40.684537+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 3006464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698528 data_alloc: 218103808 data_used: 5223
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:41.684694+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 3006464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.112061 39 0.000079
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.114811 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.114960 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.115003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887648582s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730354309s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.112179 39 0.000064
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.114523 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.114571 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.114597 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] exit Reset 0.000136 1 0.000405
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887540817s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730354309s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887639046s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730476379s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] exit Reset 0.000061 1 0.000111
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887612343s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730476379s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.112549 39 0.000061
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.114748 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.114778 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.114792 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887332916s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730545044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] exit Reset 0.000030 1 0.000058
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887315750s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730545044s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.112727 39 0.000065
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.114741 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.114773 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.114787 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887020111s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 active pruub 125.730529785s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] exit Reset 0.000026 1 0.000089
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 61 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.887008667s) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 125.730529785s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 61 handle_osd_map epochs [60,61], i have 61, src has [1,61]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fcef0000/0x0/0x4ffc00000, data 0xbf0d8/0x138000, compress 0x0/0x0/0x0, omap 0x87af, meta 0x2bc7851), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:42.684848+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.208160 3 0.000022
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.208811 3 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.208847 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.208212 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.208544 3 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.208561 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.208994 3 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.209016 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=61) [2] r=-1 lpr=61 pi=[47,61)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000028 1 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000156 1 0.000170
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000048
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000393 1 0.000463
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000039 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000580 1 0.000606
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000132
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000132
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 62 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 2990080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:43.684956+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:13.306836+0000 osd.1 (osd.1) 48 : cluster [DBG] 7.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:13.317467+0000 osd.1 (osd.1) 49 : cluster [DBG] 7.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999981 4 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000361 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999660 4 0.000208
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999943 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000385 4 0.000057
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000693 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000409 4 0.000176
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000797 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.001325 5 0.000893
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000059 1 0.000062
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000688 1 0.000035
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.001900 5 0.000984
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.002809 5 0.000552
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.002092 5 0.000645
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028370 2 0.000014
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.028223 1 0.000043
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000333 1 0.000124
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045309 2 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.073549 1 0.000093
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000330 1 0.000036
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059595 2 0.000054
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.133536 1 0.000064
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000376 1 0.000037
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045358 2 0.000043
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 63 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 2981888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 49)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:13.306836+0000 osd.1 (osd.1) 48 : cluster [DBG] 7.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:13.317467+0000 osd.1 (osd.1) 49 : cluster [DBG] 7.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:44.685165+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:14.266589+0000 osd.1 (osd.1) 50 : cluster [DBG] 3.1c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:14.277056+0000 osd.1 (osd.1) 51 : cluster [DBG] 3.1c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.973185 1 0.000204
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004047 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004475 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004505 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997287750s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.053726196s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.867717 1 0.000095
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004211 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004177 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004257 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997888565s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.054489136s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] exit Reset 0.000069 1 0.000091
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.927854 1 0.000075
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003961 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004736 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004752 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997774124s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.054504395s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] exit Reset 0.000046 1 0.000065
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997743607s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054504395s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.822257 1 0.000053
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003775 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.004670 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.004741 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[47,62)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998094559s) [2] async=[2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 active pruub 134.055038452s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] exit Reset 0.000040 1 0.000055
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] exit Start 0.000024 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.998066902s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.055038452s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.997838974s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.054489136s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] exit Reset 0.002416 1 0.002497
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] exit Start 0.000090 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 64 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64 pruub=14.994911194s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 134.053726196s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 2973696 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fcee4000/0x0/0x4ffc00000, data 0xc5e36/0x144000, compress 0x0/0x0/0x0, omap 0x91e3, meta 0x2bc6e1d), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 51)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:14.266589+0000 osd.1 (osd.1) 50 : cluster [DBG] 3.1c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:14.277056+0000 osd.1 (osd.1) 51 : cluster [DBG] 3.1c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:45.685329+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 2891776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 714440 data_alloc: 218103808 data_used: 5223
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.594760 6 0.000065
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.595186 6 0.001628
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.592906 6 0.000203
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.595168 6 0.000042
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000632 2 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000582 2 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000610 2 0.000021
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000577 2 0.000017
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 DELETING pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.079113 2 0.000099
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.079803 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.1e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.674628 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 DELETING pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.138243 2 0.000087
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.138859 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.e( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.734069 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 DELETING pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.167940 2 0.000065
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.168581 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.16( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=6 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.761626 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 DELETING pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212152 2 0.000081
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.212778 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 65 pg[9.6( v 43'551 (0'0,43'551] lb MIN local-lis/les=62/63 n=7 ec=47/37 lis/c=62/47 les/c/f=63/48/0 sis=64) [2] r=-1 lpr=64 pi=[47,64)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.807987 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:46.685487+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:16.303472+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:16.521490+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 53)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:16.303472+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:16.521490+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 3112960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 28.225575 55 0.000080
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 28.227833 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 28.227867 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 28.227885 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774515152s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 active pruub 133.730743408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] exit Reset 0.000057 1 0.000092
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774483681s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730743408s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 28.225661 55 0.000084
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 28.227941 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 28.227976 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 28.227993 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.774012566s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 active pruub 133.730667114s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] exit Reset 0.000032 1 0.000317
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 66 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66 pruub=11.773996353s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 133.730667114s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:47.685685+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.093958 3 0.000352
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.094289 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000048 1 0.000068
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.094026 3 0.000023
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.094050 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=66) [2] r=-1 lpr=66 pi=[47,66)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000031 1 0.000046
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000716 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004060 2 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003766 2 0.000749
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 67 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 3088384 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:48.685802+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 68 handle_osd_map epochs [67,68], i have 68, src has [1,68]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997613 3 0.000057
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001436 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998112 3 0.000127
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002294 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.003261 5 0.000149
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.003521 5 0.000158
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000063 1 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000370 1 0.000015
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.042958 1 0.000064
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042547 2 0.000033
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000408 1 0.000039
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052408 2 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 68 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 3047424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fceda000/0x0/0x4ffc00000, data 0xcc904/0x14a000, compress 0x0/0x0/0x0, omap 0x9bf8, meta 0x2bc6408), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:49.685961+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:19.311676+0000 osd.1 (osd.1) 54 : cluster [DBG] 5.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:19.322268+0000 osd.1 (osd.1) 55 : cluster [DBG] 5.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 55)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:19.311676+0000 osd.1 (osd.1) 54 : cluster [DBG] 5.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:19.322268+0000 osd.1 (osd.1) 55 : cluster [DBG] 5.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.922536 1 0.000070
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.022017 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.023478 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.024227 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.975432 1 0.000138
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.021797 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.024102 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.024126 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981441498s) [2] async=[2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 active pruub 139.056365967s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] exit Reset 0.000105 1 0.000178
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981397629s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056365967s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[47,67)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.981289864s) [2] async=[2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 active pruub 139.056335449s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] exit Reset 0.000494 1 0.000527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 69 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69 pruub=14.980957031s) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 139.056335449s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 3006464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:50.686130+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 3006464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688308 data_alloc: 218103808 data_used: 4393
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.804404259s of 10.846774101s, submitted: 74
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.608874 6 0.000075
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.609287 6 0.000059
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000423 1 0.000062
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000945 2 0.000079
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060206 3 0.000124
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060681 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.8( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=7 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.670036 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.104130 2 0.000093
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105121 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 70 pg[9.18( v 43'551 (0'0,43'551] lb MIN local-lis/les=67/68 n=6 ec=47/37 lis/c=67/47 les/c/f=68/48/0 sis=69) [2] r=-1 lpr=69 pi=[47,69)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.714042 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:51.686239+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcedd000/0x0/0x4ffc00000, data 0xce37d/0x14d000, compress 0x0/0x0/0x0, omap 0x9e91, meta 0x2bc616f), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 3112960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:52.686394+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 3112960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcedc000/0x0/0x4ffc00000, data 0xcfbbc/0x14c000, compress 0x0/0x0/0x0, omap 0xa12c, meta 0x2bc5ed4), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:53.686529+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 3104768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:54.686671+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 3104768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:55.686784+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:25.334504+0000 osd.1 (osd.1) 56 : cluster [DBG] 4.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:25.345079+0000 osd.1 (osd.1) 57 : cluster [DBG] 4.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 57)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:25.334504+0000 osd.1 (osd.1) 56 : cluster [DBG] 4.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:25.345079+0000 osd.1 (osd.1) 57 : cluster [DBG] 4.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 3096576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668873 data_alloc: 218103808 data_used: 3865
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:56.686968+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:26.328130+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:26.338666+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 59)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:26.328130+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:26.338666+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 70 handle_osd_map epochs [71,72], i have 70, src has [1,72]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 3072000 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:57.687138+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 3063808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:58.687285+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fced8000/0x0/0x4ffc00000, data 0xd32f4/0x152000, compress 0x0/0x0/0x0, omap 0xa387, meta 0x2bc5c79), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 39.500612 75 0.000351
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 39.503075 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 39.503110 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 39.503131 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499458313s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 active pruub 141.730636597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] exit Reset 0.000066 1 0.000094
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499427795s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730636597s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 39.500667 75 0.000135
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 39.502576 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 39.502609 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 39.502625 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=47) [1] r=0 lpr=47 crt=43'551 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499320030s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 active pruub 141.730758667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] exit Reset 0.000031 1 0.000057
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 73 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73 pruub=8.499304771s) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 141.730758667s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 3031040 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:59.687459+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.888488 3 0.000035
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.888595 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000194 1 0.000297
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000046 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.889228 3 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.889254 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=73) [2] r=-1 lpr=73 pi=[47,73)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000035 1 0.000054
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000123
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 74 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 3014656 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:00.687603+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 75 handle_osd_map epochs [74,75], i have 75, src has [1,75]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002827 4 0.000053
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002952 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003599 4 0.000149
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003794 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=47/48 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.035962 5 0.000480
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000043 1 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.035983 5 0.000184
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000797 1 0.000010
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035528 2 0.000061
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.036073 1 0.000034
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000304 1 0.000051
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066501 2 0.000051
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 75 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 2990080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 686792 data_alloc: 218103808 data_used: 3865
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fced0000/0x0/0x4ffc00000, data 0xd6943/0x158000, compress 0x0/0x0/0x0, omap 0xa8c5, meta 0x2bc573b), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.037201881s of 10.051774025s, submitted: 64
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:01.687748+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:31.388138+0000 osd.1 (osd.1) 60 : cluster [DBG] 6.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:31.402293+0000 osd.1 (osd.1) 61 : cluster [DBG] 6.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 61)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:31.388138+0000 osd.1 (osd.1) 60 : cluster [DBG] 6.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:31.402293+0000 osd.1 (osd.1) 61 : cluster [DBG] 6.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.931866 1 0.000154
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004682 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.007647 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.007724 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031506538s) [2] async=[2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 active pruub 151.159759521s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] exit Reset 0.000074 1 0.000107
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031457901s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.865321 1 0.000074
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004362 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.008178 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.008258 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[47,74)/1 crt=43'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031195641s) [2] async=[2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 active pruub 151.159759521s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] exit Reset 0.000126 1 0.000150
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 76 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76 pruub=15.031086922s) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY pruub 151.159759521s@ mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 2949120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:02.687892+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:32.381311+0000 osd.1 (osd.1) 62 : cluster [DBG] 6.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:32.391893+0000 osd.1 (osd.1) 63 : cluster [DBG] 6.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 63)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:32.381311+0000 osd.1 (osd.1) 62 : cluster [DBG] 6.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:32.391893+0000 osd.1 (osd.1) 63 : cluster [DBG] 6.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007325 7 0.000059
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007839 7 0.000049
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000120
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000050 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 DELETING pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075016 2 0.000170
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.075174 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.1c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=6 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.082566 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 DELETING pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111936 2 0.000093
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112026 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 77 pg[9.c( v 43'551 (0'0,43'551] lb MIN local-lis/les=74/75 n=7 ec=47/37 lis/c=74/47 les/c/f=75/48/0 sis=76) [2] r=-1 lpr=76 pi=[47,76)/1 crt=43'551 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.119901 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 2924544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:03.688060+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:33.346831+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:33.357425+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 65)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:33.346831+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:33.357425+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 2924544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fceca000/0x0/0x4ffc00000, data 0xdb83f/0x15e000, compress 0x0/0x0/0x0, omap 0xb06a, meta 0x2bc4f96), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:04.688208+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 2924544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:05.688319+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fceca000/0x0/0x4ffc00000, data 0xdb83f/0x15e000, compress 0x0/0x0/0x0, omap 0xb06a, meta 0x2bc4f96), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 2908160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679412 data_alloc: 218103808 data_used: 4032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:06.688460+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:36.404237+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:36.414805+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 67)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:36.404237+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:36.414805+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 2899968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:07.688646+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 2891776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:08.688792+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 2891776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 78 handle_osd_map epochs [80,80], i have 78, src has [1,80]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 78 handle_osd_map epochs [79,80], i have 78, src has [1,80]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:09.688932+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:39.440980+0000 osd.1 (osd.1) 68 : cluster [DBG] 6.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:39.455187+0000 osd.1 (osd.1) 69 : cluster [DBG] 6.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 69)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:39.440980+0000 osd.1 (osd.1) 68 : cluster [DBG] 6.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:39.455187+0000 osd.1 (osd.1) 69 : cluster [DBG] 6.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 2883584 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:10.689097+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:40.435718+0000 osd.1 (osd.1) 70 : cluster [DBG] 6.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:40.446276+0000 osd.1 (osd.1) 71 : cluster [DBG] 6.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 71)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:40.435718+0000 osd.1 (osd.1) 70 : cluster [DBG] 6.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:40.446276+0000 osd.1 (osd.1) 71 : cluster [DBG] 6.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 2875392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 690541 data_alloc: 218103808 data_used: 4032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xe0b13/0x167000, compress 0x0/0x0/0x0, omap 0xb571, meta 0x2bc4a8f), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.277196884s of 10.296418190s, submitted: 33
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:11.689234+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 2875392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:12.689375+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:42.430942+0000 osd.1 (osd.1) 72 : cluster [DBG] 6.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:42.441531+0000 osd.1 (osd.1) 73 : cluster [DBG] 6.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 73)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:42.430942+0000 osd.1 (osd.1) 72 : cluster [DBG] 6.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:42.441531+0000 osd.1 (osd.1) 73 : cluster [DBG] 6.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 2867200 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:13.689587+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 2859008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fcebb000/0x0/0x4ffc00000, data 0xe424b/0x16d000, compress 0x0/0x0/0x0, omap 0xba7c, meta 0x2bc4584), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:14.689728+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:44.402957+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:44.413618+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 75)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:44.402957+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:44.413618+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 2859008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:15.689867+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:45.438988+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:45.449596+0000 osd.1 (osd.1) 77 : cluster [DBG] 4.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 77)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:45.438988+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:45.449596+0000 osd.1 (osd.1) 77 : cluster [DBG] 4.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 2850816 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 707402 data_alloc: 218103808 data_used: 4617
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xe5de7/0x170000, compress 0x0/0x0/0x0, omap 0xbd29, meta 0x2bc42d7), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:16.690036+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 2850816 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:17.690157+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 2850816 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:18.690266+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:48.490980+0000 osd.1 (osd.1) 78 : cluster [DBG] 2.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:48.501547+0000 osd.1 (osd.1) 79 : cluster [DBG] 2.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 84 handle_osd_map epochs [85,87], i have 84, src has [1,87]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=0 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=0 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000041
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000785 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000249 1 0.001136
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001661 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 87 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 79)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:48.490980+0000 osd.1 (osd.1) 78 : cluster [DBG] 2.d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:48.501547+0000 osd.1 (osd.1) 79 : cluster [DBG] 2.d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 2744320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:19.690614+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 87 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003131 2 0.001444
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.004853 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.005914 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000044 1 0.000070
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 88 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 2875392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fceac000/0x0/0x4ffc00000, data 0xeca2d/0x17c000, compress 0x0/0x0/0x0, omap 0xc23b, meta 0x2bc3dc5), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:20.690756+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.002741 6 0.000029
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001496 3 0.000122
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000029 1 0.000043
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028640 1 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 2850816 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 734458 data_alloc: 218103808 data_used: 5202
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:21.690912+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.030361176s of 10.050309181s, submitted: 56
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.983411 1 0.000022
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.013683 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.016474 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000327 1 0.000404
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000085 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000189
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000777 3 0.000077
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 2842624 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:22.691059+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002275 2 0.000062
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003179 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=90/54 les/c/f=91/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001043 3 0.000089
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=90/54 les/c/f=91/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=90/54 les/c/f=91/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=90/91 n=6 ec=47/37 lis/c=90/54 les/c/f=91/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 1744896 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:23.691193+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:53.585766+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:53.596376+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 81)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:53.585766+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:53.596376+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 1744896 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fcea1000/0x0/0x4ffc00000, data 0xf3437/0x189000, compress 0x0/0x0/0x0, omap 0xcc6b, meta 0x2bc3395), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:24.691351+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 1736704 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:25.691483+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 1728512 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745041 data_alloc: 218103808 data_used: 5202
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:26.691626+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:56.639028+0000 osd.1 (osd.1) 82 : cluster [DBG] 6.b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:56.653278+0000 osd.1 (osd.1) 83 : cluster [DBG] 6.b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 83)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:56.639028+0000 osd.1 (osd.1) 82 : cluster [DBG] 6.b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:56.653278+0000 osd.1 (osd.1) 83 : cluster [DBG] 6.b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 92 handle_osd_map epochs [93,94], i have 92, src has [1,94]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 1720320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:27.691792+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 1712128 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:28.691922+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:58.659693+0000 osd.1 (osd.1) 84 : cluster [DBG] 5.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:58.670284+0000 osd.1 (osd.1) 85 : cluster [DBG] 5.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 85)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:58.659693+0000 osd.1 (osd.1) 84 : cluster [DBG] 5.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:58.670284+0000 osd.1 (osd.1) 85 : cluster [DBG] 5.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 1662976 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 94 handle_osd_map epochs [95,96], i have 94, src has [1,96]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:29.692119+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:59.650932+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.8 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:16:59.661527+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.8 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 87)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:59.650932+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.8 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:16:59.661527+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.8 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xfbc19/0x198000, compress 0x0/0x0/0x0, omap 0xd3ef, meta 0x2bc2c11), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:30.692305+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:00.641898+0000 osd.1 (osd.1) 88 : cluster [DBG] 5.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:00.652615+0000 osd.1 (osd.1) 89 : cluster [DBG] 5.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 89)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:00.641898+0000 osd.1 (osd.1) 88 : cluster [DBG] 5.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:00.652615+0000 osd.1 (osd.1) 89 : cluster [DBG] 5.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766127 data_alloc: 218103808 data_used: 6657
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:31.692494+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.027341843s of 10.044580460s, submitted: 44
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 1597440 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:32.692677+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:33.692828+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:34.692961+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:04.661842+0000 osd.1 (osd.1) 90 : cluster [DBG] 6.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:04.672417+0000 osd.1 (osd.1) 91 : cluster [DBG] 6.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 91)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:04.661842+0000 osd.1 (osd.1) 90 : cluster [DBG] 6.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:04.672417+0000 osd.1 (osd.1) 91 : cluster [DBG] 6.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 2564096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fce86000/0x0/0x4ffc00000, data 0x102857/0x1a4000, compress 0x0/0x0/0x0, omap 0xde39, meta 0x2bc21c7), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:35.693103+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 2555904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782436 data_alloc: 218103808 data_used: 6657
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:36.693261+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 2555904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:37.693397+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 2555904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fce81000/0x0/0x4ffc00000, data 0x1042a6/0x1a7000, compress 0x0/0x0/0x0, omap 0xe0a1, meta 0x2bc1f5f), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:38.693536+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 2547712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:39.693710+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 2547712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:40.693839+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 2539520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782436 data_alloc: 218103808 data_used: 6657
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fce81000/0x0/0x4ffc00000, data 0x1042a6/0x1a7000, compress 0x0/0x0/0x0, omap 0xe0a1, meta 0x2bc1f5f), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:41.694006+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 2523136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.514952660s of 10.520758629s, submitted: 7
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:42.694163+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79265792 unmapped: 2514944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:43.694273+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:12.761842+0000 osd.1 (osd.1) 92 : cluster [DBG] 4.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:12.772285+0000 osd.1 (osd.1) 93 : cluster [DBG] 4.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 2498560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 103 handle_osd_map epochs [103,104], i have 104, src has [1,104]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 93)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:12.761842+0000 osd.1 (osd.1) 92 : cluster [DBG] 4.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:12.772285+0000 osd.1 (osd.1) 93 : cluster [DBG] 4.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:44.694471+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 2744320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:45.694602+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 2744320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795945 data_alloc: 218103808 data_used: 6657
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0x10b01c/0x1b3000, compress 0x0/0x0/0x0, omap 0xeafd, meta 0x2bc1503), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0x10b01c/0x1b3000, compress 0x0/0x0/0x0, omap 0xeafd, meta 0x2bc1503), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 105 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:46.694704+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 2686976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0x10cbd5/0x1b6000, compress 0x0/0x0/0x0, omap 0xed67, meta 0x2bc1299), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:47.694821+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 2686976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:48.694960+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 2678784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 106 handle_osd_map epochs [107,108], i have 106, src has [1,108]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0x10cbd5/0x1b6000, compress 0x0/0x0/0x0, omap 0xed67, meta 0x2bc1299), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 108 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:49.695114+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 2670592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:50.695262+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 2785280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810967 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:51.695462+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 2785280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:52.695641+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 2785280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:53.695800+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 2777088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.081542015s of 12.090872765s, submitted: 13
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f(unlocked)] enter Initial
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=0 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=0 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000024
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000495 1 0.000032
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000552 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:54.695935+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 2711552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fce65000/0x0/0x4ffc00000, data 0x11510d/0x1c5000, compress 0x0/0x0/0x0, omap 0xf772, meta 0x2bc088e), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.005684 2 0.000071
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.006259 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.006277 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=111) [1] r=0 lpr=111 pi=[67,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000071 1 0.000098
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:55.696081+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 2703360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818978 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:56.696222+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:25.711603+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.15 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:25.722189+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.15 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.346244 5 0.000038
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=67/67 les/c/f=68/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001015 4 0.000607
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000047 1 0.000047
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039869 1 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 113 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fce62000/0x0/0x4ffc00000, data 0x116b8e/0x1c8000, compress 0x0/0x0/0x0, omap 0xfa41, meta 0x2bc05bf), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 2703360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 95)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:25.711603+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.15 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:25.722189+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.15 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.624326 1 0.000022
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.665343 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.011754 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[67,112)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 1 0.000063
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000025
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Dec 13 07:36:29 compute-0 ceph-osd[86142]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001107 3 0.000026
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 114 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:57.696396+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:26.745887+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:26.759833+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 2695168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fce57000/0x0/0x4ffc00000, data 0x11a0c8/0x1cf000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 114 handle_osd_map epochs [114,115], i have 115, src has [1,115]
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 97)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:26.745887+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:26.759833+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999701 2 0.000060
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000870 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=112/113 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=112/67 les/c/f=113/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=114/67 les/c/f=115/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001260 4 0.000364
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=114/67 les/c/f=115/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=114/67 les/c/f=115/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 pg_epoch: 115 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=114/115 n=6 ec=47/37 lis/c=114/67 les/c/f=115/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:58.696610+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 1 last_log 98 sent 97 num 1 unsent 1 sending 1
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:28.687545+0000 osd.1 (osd.1) 98 : cluster [DBG] 4.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 2678784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 98)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:28.687545+0000 osd.1 (osd.1) 98 : cluster [DBG] 4.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:59.696930+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 1 last_log 99 sent 98 num 1 unsent 1 sending 1
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:28.697287+0000 osd.1 (osd.1) 99 : cluster [DBG] 4.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 2670592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 99)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:28.697287+0000 osd.1 (osd.1) 99 : cluster [DBG] 4.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:00.697075+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 2670592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840035 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:01.697217+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 2670592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _renew_subs
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:02.697367+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 2662400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:03.697517+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 2662400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:04.697676+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 2654208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:05.697797+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 2654208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840035 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.232923508s of 12.249648094s, submitted: 28
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:06.698510+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:36.640809+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:36.650965+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 2654208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 101)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:36.640809+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:36.650965+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:07.698695+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:37.619225+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:37.629704+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 2646016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 103)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:37.619225+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:37.629704+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:08.698876+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:38.664528+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.11 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:38.675080+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.11 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 2646016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 105)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:38.664528+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.11 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:38.675080+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.11 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:09.699013+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:10.699152+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:39.708601+0000 osd.1 (osd.1) 106 : cluster [DBG] 6.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:39.719197+0000 osd.1 (osd.1) 107 : cluster [DBG] 6.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848965 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 107)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:39.708601+0000 osd.1 (osd.1) 106 : cluster [DBG] 6.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:39.719197+0000 osd.1 (osd.1) 107 : cluster [DBG] 6.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:11.699321+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 2637824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:12.699478+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 2629632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:13.699615+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 2629632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:14.699755+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 2629632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:15.699891+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 2621440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848965 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:16.700006+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 2613248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:17.700139+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79175680 unmapped: 2605056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:18.700276+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79175680 unmapped: 2605056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:19.700403+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.060271263s of 13.066132545s, submitted: 8
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79183872 unmapped: 2596864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:20.700547+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:49.706989+0000 osd.1 (osd.1) 108 : cluster [DBG] 6.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:49.721400+0000 osd.1 (osd.1) 109 : cluster [DBG] 6.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79183872 unmapped: 2596864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851378 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 109)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:49.706989+0000 osd.1 (osd.1) 108 : cluster [DBG] 6.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:49.721400+0000 osd.1 (osd.1) 109 : cluster [DBG] 6.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:21.700724+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 2588672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:22.700879+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 2588672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:23.700988+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 2588672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:24.701139+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:54.615478+0000 osd.1 (osd.1) 110 : cluster [DBG] 6.1c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:54.629680+0000 osd.1 (osd.1) 111 : cluster [DBG] 6.1c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79208448 unmapped: 2572288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 111)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:54.615478+0000 osd.1 (osd.1) 110 : cluster [DBG] 6.1c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:54.629680+0000 osd.1 (osd.1) 111 : cluster [DBG] 6.1c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:25.701315+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:55.574957+0000 osd.1 (osd.1) 112 : cluster [DBG] 2.1b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:55.585645+0000 osd.1 (osd.1) 113 : cluster [DBG] 2.1b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 2555904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856204 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 113)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:55.574957+0000 osd.1 (osd.1) 112 : cluster [DBG] 2.1b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:55.585645+0000 osd.1 (osd.1) 113 : cluster [DBG] 2.1b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:26.701475+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:56.540422+0000 osd.1 (osd.1) 114 : cluster [DBG] 6.e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:56.554586+0000 osd.1 (osd.1) 115 : cluster [DBG] 6.e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 2547712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 115)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:56.540422+0000 osd.1 (osd.1) 114 : cluster [DBG] 6.e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:56.554586+0000 osd.1 (osd.1) 115 : cluster [DBG] 6.e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:27.701628+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 2547712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:28.701781+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:58.481356+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:17:58.491969+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 2539520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 117)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:58.481356+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:17:58.491969+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:29.701957+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 2539520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.775984764s of 10.782183647s, submitted: 10
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:30.702062+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:00.489177+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.3 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:00.499790+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.3 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 2523136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863437 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 119)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:00.489177+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.3 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:00.499790+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.3 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:31.702213+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:01.439949+0000 osd.1 (osd.1) 120 : cluster [DBG] 2.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:01.450621+0000 osd.1 (osd.1) 121 : cluster [DBG] 2.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 2523136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 121)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:01.439949+0000 osd.1 (osd.1) 120 : cluster [DBG] 2.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:01.450621+0000 osd.1 (osd.1) 121 : cluster [DBG] 2.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:32.702350+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 2523136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:33.702473+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79265792 unmapped: 2514944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:34.702576+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79265792 unmapped: 2514944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:35.702690+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:05.507046+0000 osd.1 (osd.1) 122 : cluster [DBG] 4.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:05.517641+0000 osd.1 (osd.1) 123 : cluster [DBG] 4.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 2506752 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868261 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 123)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:05.507046+0000 osd.1 (osd.1) 122 : cluster [DBG] 4.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:05.517641+0000 osd.1 (osd.1) 123 : cluster [DBG] 4.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:36.702865+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:06.478145+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:06.488815+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 2498560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 125)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:06.478145+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:06.488815+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:37.703028+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:07.476033+0000 osd.1 (osd.1) 126 : cluster [DBG] 2.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:07.486594+0000 osd.1 (osd.1) 127 : cluster [DBG] 2.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 2482176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 127)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:07.476033+0000 osd.1 (osd.1) 126 : cluster [DBG] 2.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:07.486594+0000 osd.1 (osd.1) 127 : cluster [DBG] 2.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:38.703207+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 2465792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:39.703350+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 2465792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:40.703493+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:10.436177+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:10.446757+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 2457600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875494 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 129)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:10.436177+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:10.446757+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:41.703682+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 2457600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:42.703829+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 2449408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:43.703928+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 2449408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:44.704046+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 2449408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:45.704154+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 2441216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875494 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:46.704302+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 2441216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:47.704468+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 2433024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.874313354s of 17.881334305s, submitted: 12
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:48.704599+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:18.370562+0000 osd.1 (osd.1) 130 : cluster [DBG] 2.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:18.381214+0000 osd.1 (osd.1) 131 : cluster [DBG] 2.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 2433024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 131)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:18.370562+0000 osd.1 (osd.1) 130 : cluster [DBG] 2.9 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:18.381214+0000 osd.1 (osd.1) 131 : cluster [DBG] 2.9 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:49.704885+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:19.328844+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:19.339427+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 2433024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 133)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:19.328844+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:19.339427+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:50.705046+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:20.332012+0000 osd.1 (osd.1) 134 : cluster [DBG] 5.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:20.342547+0000 osd.1 (osd.1) 135 : cluster [DBG] 5.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 2408448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882727 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 135)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:20.332012+0000 osd.1 (osd.1) 134 : cluster [DBG] 5.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:20.342547+0000 osd.1 (osd.1) 135 : cluster [DBG] 5.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:51.705242+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:21.354308+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:21.364896+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 2408448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 137)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:21.354308+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:21.364896+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:52.705392+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:22.393644+0000 osd.1 (osd.1) 138 : cluster [DBG] 5.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:22.404217+0000 osd.1 (osd.1) 139 : cluster [DBG] 5.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 2400256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:53.705554+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 4 last_log 141 sent 139 num 4 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:23.347171+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.18 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:23.357748+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.18 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 139)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:22.393644+0000 osd.1 (osd.1) 138 : cluster [DBG] 5.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:22.404217+0000 osd.1 (osd.1) 139 : cluster [DBG] 5.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 2392064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:54.705716+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 4 last_log 143 sent 141 num 4 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:24.338359+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:24.348948+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 141)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:23.347171+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.18 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:23.357748+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.18 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 143)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:24.338359+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:24.348948+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 2383872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:55.705893+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:25.319685+0000 osd.1 (osd.1) 144 : cluster [DBG] 6.1e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:25.330268+0000 osd.1 (osd.1) 145 : cluster [DBG] 6.1e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 145)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:25.319685+0000 osd.1 (osd.1) 144 : cluster [DBG] 6.1e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:25.330268+0000 osd.1 (osd.1) 145 : cluster [DBG] 6.1e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 2367488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894790 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:56.706206+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 2367488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:57.706420+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 2359296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:58.706637+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 2359296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.940398216s of 10.950827599s, submitted: 16
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:59.706808+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:29.321300+0000 osd.1 (osd.1) 146 : cluster [DBG] 11.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:29.331874+0000 osd.1 (osd.1) 147 : cluster [DBG] 11.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 147)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:29.321300+0000 osd.1 (osd.1) 146 : cluster [DBG] 11.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:29.331874+0000 osd.1 (osd.1) 147 : cluster [DBG] 11.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 2334720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:00.707026+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:30.326568+0000 osd.1 (osd.1) 148 : cluster [DBG] 8.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:30.337159+0000 osd.1 (osd.1) 149 : cluster [DBG] 8.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 149)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:30.326568+0000 osd.1 (osd.1) 148 : cluster [DBG] 8.16 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:30.337159+0000 osd.1 (osd.1) 149 : cluster [DBG] 8.16 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 2326528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899618 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:01.707221+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:31.365858+0000 osd.1 (osd.1) 150 : cluster [DBG] 8.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:31.376429+0000 osd.1 (osd.1) 151 : cluster [DBG] 8.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 151)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:31.365858+0000 osd.1 (osd.1) 150 : cluster [DBG] 8.17 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:31.376429+0000 osd.1 (osd.1) 151 : cluster [DBG] 8.17 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 2310144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:02.707527+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 2310144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:03.707704+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 2310144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:04.707881+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:34.434587+0000 osd.1 (osd.1) 152 : cluster [DBG] 11.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:34.445078+0000 osd.1 (osd.1) 153 : cluster [DBG] 11.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 153)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:34.434587+0000 osd.1 (osd.1) 152 : cluster [DBG] 11.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:34.445078+0000 osd.1 (osd.1) 153 : cluster [DBG] 11.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 2301952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:05.708080+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 2301952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904446 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:06.708222+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:36.446345+0000 osd.1 (osd.1) 154 : cluster [DBG] 8.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:36.456925+0000 osd.1 (osd.1) 155 : cluster [DBG] 8.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 155)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:36.446345+0000 osd.1 (osd.1) 154 : cluster [DBG] 8.1 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:36.456925+0000 osd.1 (osd.1) 155 : cluster [DBG] 8.1 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 2301952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:07.708418+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 2293760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:08.708550+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 2293760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:09.708682+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 2285568 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.080490112s of 11.087811470s, submitted: 10
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:10.708818+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:40.408688+0000 osd.1 (osd.1) 156 : cluster [DBG] 11.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:40.419275+0000 osd.1 (osd.1) 157 : cluster [DBG] 11.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 157)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:40.408688+0000 osd.1 (osd.1) 156 : cluster [DBG] 11.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:40.419275+0000 osd.1 (osd.1) 157 : cluster [DBG] 11.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 2269184 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909270 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:11.708972+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:41.417753+0000 osd.1 (osd.1) 158 : cluster [DBG] 8.3 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:41.428373+0000 osd.1 (osd.1) 159 : cluster [DBG] 8.3 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 159)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:41.417753+0000 osd.1 (osd.1) 158 : cluster [DBG] 8.3 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:41.428373+0000 osd.1 (osd.1) 159 : cluster [DBG] 8.3 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 2260992 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:12.709147+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:42.448411+0000 osd.1 (osd.1) 160 : cluster [DBG] 8.8 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:42.459010+0000 osd.1 (osd.1) 161 : cluster [DBG] 8.8 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 161)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:42.448411+0000 osd.1 (osd.1) 160 : cluster [DBG] 8.8 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:42.459010+0000 osd.1 (osd.1) 161 : cluster [DBG] 8.8 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 2260992 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:13.709295+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:43.415747+0000 osd.1 (osd.1) 162 : cluster [DBG] 8.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:43.426344+0000 osd.1 (osd.1) 163 : cluster [DBG] 8.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 163)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:43.415747+0000 osd.1 (osd.1) 162 : cluster [DBG] 8.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:43.426344+0000 osd.1 (osd.1) 163 : cluster [DBG] 8.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 2252800 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:14.709477+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 2252800 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:15.709574+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 2244608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916503 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:16.709702+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 2244608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:17.709807+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 2244608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:18.709899+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:48.312741+0000 osd.1 (osd.1) 164 : cluster [DBG] 11.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:48.326904+0000 osd.1 (osd.1) 165 : cluster [DBG] 11.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 2236416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 165)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:48.312741+0000 osd.1 (osd.1) 164 : cluster [DBG] 11.c scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:48.326904+0000 osd.1 (osd.1) 165 : cluster [DBG] 11.c scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:19.710018+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 2236416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:20.710119+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 2220032 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918916 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:21.710213+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 2220032 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:22.710347+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.815450668s of 12.822014809s, submitted: 10
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 2211840 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:23.710499+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:53.231208+0000 osd.1 (osd.1) 166 : cluster [DBG] 11.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:53.241799+0000 osd.1 (osd.1) 167 : cluster [DBG] 11.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 2203648 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 167)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:53.231208+0000 osd.1 (osd.1) 166 : cluster [DBG] 11.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:53.241799+0000 osd.1 (osd.1) 167 : cluster [DBG] 11.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:24.710692+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 2195456 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:25.710803+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:55.281954+0000 osd.1 (osd.1) 168 : cluster [DBG] 8.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:55.292535+0000 osd.1 (osd.1) 169 : cluster [DBG] 8.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 2187264 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923740 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 169)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:55.281954+0000 osd.1 (osd.1) 168 : cluster [DBG] 8.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:55.292535+0000 osd.1 (osd.1) 169 : cluster [DBG] 8.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:26.710964+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 2187264 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:27.711086+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:57.339395+0000 osd.1 (osd.1) 170 : cluster [DBG] 8.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:57.349971+0000 osd.1 (osd.1) 171 : cluster [DBG] 8.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 2179072 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 171)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:57.339395+0000 osd.1 (osd.1) 170 : cluster [DBG] 8.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:57.349971+0000 osd.1 (osd.1) 171 : cluster [DBG] 8.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:28.711265+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:58.371794+0000 osd.1 (osd.1) 172 : cluster [DBG] 11.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:18:58.382362+0000 osd.1 (osd.1) 173 : cluster [DBG] 11.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 2179072 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 173)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:58.371794+0000 osd.1 (osd.1) 172 : cluster [DBG] 11.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:18:58.382362+0000 osd.1 (osd.1) 173 : cluster [DBG] 11.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:29.711428+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 2170880 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:30.711568+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 2162688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928564 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:31.711680+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 2162688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:32.711801+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:02.390844+0000 osd.1 (osd.1) 174 : cluster [DBG] 8.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:02.401420+0000 osd.1 (osd.1) 175 : cluster [DBG] 8.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 2154496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 175)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:02.390844+0000 osd.1 (osd.1) 174 : cluster [DBG] 8.5 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:02.401420+0000 osd.1 (osd.1) 175 : cluster [DBG] 8.5 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.193034172s of 10.200374603s, submitted: 10
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:33.711948+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:03.431604+0000 osd.1 (osd.1) 176 : cluster [DBG] 11.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:03.442196+0000 osd.1 (osd.1) 177 : cluster [DBG] 11.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 2154496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 177)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:03.431604+0000 osd.1 (osd.1) 176 : cluster [DBG] 11.7 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:03.442196+0000 osd.1 (osd.1) 177 : cluster [DBG] 11.7 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:34.712127+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 2146304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:35.712254+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:05.422745+0000 osd.1 (osd.1) 178 : cluster [DBG] 8.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:05.436878+0000 osd.1 (osd.1) 179 : cluster [DBG] 8.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 2146304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935801 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 179)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:05.422745+0000 osd.1 (osd.1) 178 : cluster [DBG] 8.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:05.436878+0000 osd.1 (osd.1) 179 : cluster [DBG] 8.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:36.712397+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 2146304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:37.712517+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 2138112 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:38.712661+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 2138112 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:39.712803+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 2129920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:40.712911+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 2129920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938216 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:41.713066+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:11.325725+0000 osd.1 (osd.1) 180 : cluster [DBG] 11.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:11.336338+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 2113536 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 181)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:11.325725+0000 osd.1 (osd.1) 180 : cluster [DBG] 11.1d scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:11.336338+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.1d scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:42.713225+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:12.359517+0000 osd.1 (osd.1) 182 : cluster [DBG] 8.1e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:12.370061+0000 osd.1 (osd.1) 183 : cluster [DBG] 8.1e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 2088960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 183)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:12.359517+0000 osd.1 (osd.1) 182 : cluster [DBG] 8.1e scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:12.370061+0000 osd.1 (osd.1) 183 : cluster [DBG] 8.1e scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:43.713414+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 2088960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:44.713538+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 2080768 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:45.713649+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 2072576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940629 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.985782623s of 12.992125511s, submitted: 8
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:46.713753+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:16.423788+0000 osd.1 (osd.1) 184 : cluster [DBG] 8.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:16.434412+0000 osd.1 (osd.1) 185 : cluster [DBG] 8.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 2064384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 185)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:16.423788+0000 osd.1 (osd.1) 184 : cluster [DBG] 8.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:16.434412+0000 osd.1 (osd.1) 185 : cluster [DBG] 8.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:47.713895+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 2064384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:48.714007+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 2048000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:49.714125+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 2048000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:50.714238+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 2048000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943042 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:51.714385+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:21.374838+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:21.385399+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 2039808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 187)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:21.374838+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:21.385399+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:52.714569+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 2039808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:53.714681+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 2031616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:54.714792+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:24.407716+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:24.418299+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 2031616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 189)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:24.407716+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.13 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:24.418299+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.13 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:55.714966+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 2031616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947872 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:56.715140+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 2023424 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.039779663s of 11.043544769s, submitted: 6
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:57.715302+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:27.467063+0000 osd.1 (osd.1) 190 : cluster [DBG] 10.11 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:27.477666+0000 osd.1 (osd.1) 191 : cluster [DBG] 10.11 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 2023424 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 191)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:27.467063+0000 osd.1 (osd.1) 190 : cluster [DBG] 10.11 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:27.477666+0000 osd.1 (osd.1) 191 : cluster [DBG] 10.11 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:58.715475+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 2015232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:59.715628+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 2015232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:00.715771+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 2015232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950287 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:01.715891+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 2007040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:02.715999+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 2007040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:03.716146+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 1990656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:04.716271+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 1990656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:05.716414+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:35.429934+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:35.440488+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 1990656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955117 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 193)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:35.429934+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:35.440488+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:06.716627+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:36.401914+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:36.412478+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 1966080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 195)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:36.401914+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.19 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:36.412478+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.19 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:07.716804+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 1966080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.985705376s of 10.989761353s, submitted: 6
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:08.716949+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:38.457123+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:38.467690+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 1957888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 197)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:38.457123+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.6 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:38.467690+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.6 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:09.717105+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 1957888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:10.717216+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 1957888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957530 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:11.717322+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 1949696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:12.717485+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 1949696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:13.717596+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 1941504 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:14.717710+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:44.453364+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:44.463946+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 1925120 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 199)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:44.453364+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:44.463946+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:15.717902+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 1925120 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959943 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:16.718018+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:46.433104+0000 osd.1 (osd.1) 200 : cluster [DBG] 10.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:46.443645+0000 osd.1 (osd.1) 201 : cluster [DBG] 10.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 1916928 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 201)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:46.433104+0000 osd.1 (osd.1) 200 : cluster [DBG] 10.f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:46.443645+0000 osd.1 (osd.1) 201 : cluster [DBG] 10.f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:17.718175+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 1916928 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:18.718331+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 1908736 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:19.718491+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.905800819s of 11.910030365s, submitted: 6
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 1908736 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:20.718619+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:50.366882+0000 osd.1 (osd.1) 202 : cluster [DBG] 10.b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:50.377427+0000 osd.1 (osd.1) 203 : cluster [DBG] 10.b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967184 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 203)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:50.366882+0000 osd.1 (osd.1) 202 : cluster [DBG] 10.b scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:50.377427+0000 osd.1 (osd.1) 203 : cluster [DBG] 10.b scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:21.718749+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:51.328100+0000 osd.1 (osd.1) 204 : cluster [DBG] 10.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:51.342245+0000 osd.1 (osd.1) 205 : cluster [DBG] 10.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 205)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:51.328100+0000 osd.1 (osd.1) 204 : cluster [DBG] 10.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:51.342245+0000 osd.1 (osd.1) 205 : cluster [DBG] 10.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:22.718907+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 1884160 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:23.719026+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 1875968 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:24.719134+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 1875968 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:25.719289+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:55.237818+0000 osd.1 (osd.1) 206 : cluster [DBG] 10.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:55.251902+0000 osd.1 (osd.1) 207 : cluster [DBG] 10.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 1867776 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969599 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 207)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:55.237818+0000 osd.1 (osd.1) 206 : cluster [DBG] 10.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:55.251902+0000 osd.1 (osd.1) 207 : cluster [DBG] 10.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:26.719429+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 1867776 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:27.719537+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 1859584 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:28.719649+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:58.248402+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.15 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:19:58.273137+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.15 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 1859584 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 209)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:58.248402+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.15 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:19:58.273137+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.15 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:29.719797+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 1859584 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:30.719912+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 1851392 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972012 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:31.720051+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.826974869s of 11.832626343s, submitted: 8
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 1835008 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:32.720153+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:02.199808+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:02.238656+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 1835008 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 211)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:02.199808+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.14 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:02.238656+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.14 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:33.720304+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 1818624 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:34.720414+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 1810432 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:35.720515+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 1802240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976836 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:36.720646+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:06.215138+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:06.264602+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 1802240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 213)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:06.215138+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.0 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:06.264602+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.0 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:37.720901+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 1802240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:38.721085+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:08.235749+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:08.278177+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 1777664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 215)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:08.235749+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.2 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:08.278177+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.2 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:39.721892+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 1777664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:40.722038+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 1761280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981658 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:41.722196+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:11.299837+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:11.342181+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.098447800s of 10.105239868s, submitted: 8
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 1761280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 217)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:11.299837+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:11.342181+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:42.723967+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 219 sent 217 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:12.305079+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:12.350963+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 1761280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 219)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:12.305079+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.4 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:12.350963+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.4 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:43.724211+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 221 sent 219 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:13.286790+0000 osd.1 (osd.1) 220 : cluster [DBG] 9.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:13.311523+0000 osd.1 (osd.1) 221 : cluster [DBG] 9.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 1753088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 221)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:13.286790+0000 osd.1 (osd.1) 220 : cluster [DBG] 9.1a scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:13.311523+0000 osd.1 (osd.1) 221 : cluster [DBG] 9.1a scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:44.724529+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 1753088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:45.724704+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1744896 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986482 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:46.724850+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1744896 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:47.725658+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 223 sent 221 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:17.280082+0000 osd.1 (osd.1) 222 : cluster [DBG] 9.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:17.308328+0000 osd.1 (osd.1) 223 : cluster [DBG] 9.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 1712128 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 223)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:17.280082+0000 osd.1 (osd.1) 222 : cluster [DBG] 9.12 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:17.308328+0000 osd.1 (osd.1) 223 : cluster [DBG] 9.12 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:48.725801+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 225 sent 223 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:18.286701+0000 osd.1 (osd.1) 224 : cluster [DBG] 9.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:18.304343+0000 osd.1 (osd.1) 225 : cluster [DBG] 9.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 1703936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 225)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:18.286701+0000 osd.1 (osd.1) 224 : cluster [DBG] 9.10 scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:18.304343+0000 osd.1 (osd.1) 225 : cluster [DBG] 9.10 scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:49.725975+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  log_queue is 2 last_log 227 sent 225 num 2 unsent 2 sending 2
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:19.307276+0000 osd.1 (osd.1) 226 : cluster [DBG] 9.1f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  will send 2025-12-13T07:20:19.335533+0000 osd.1 (osd.1) 227 : cluster [DBG] 9.1f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 1703936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client handle_log_ack log(last 227)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:19.307276+0000 osd.1 (osd.1) 226 : cluster [DBG] 9.1f scrub starts
Dec 13 07:36:29 compute-0 ceph-osd[86142]: log_client  logged 2025-12-13T07:20:19.335533+0000 osd.1 (osd.1) 227 : cluster [DBG] 9.1f scrub ok
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:50.726135+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1695744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:51.726241+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1695744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:52.726368+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 1687552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:53.726468+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 1687552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:54.726576+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 1687552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:55.726693+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1679360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:56.726804+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1679360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:57.726913+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1679360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:58.727039+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 1671168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:59.727195+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 1662976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:00.727307+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1654784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:01.727419+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1654784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:02.727544+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 1646592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:03.727653+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 1646592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:04.727781+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 1646592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:05.727890+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1638400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:06.727993+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1638400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:07.728108+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 1630208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:08.728206+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1622016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:09.728324+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1622016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:10.728426+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1613824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:11.728553+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1613824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:12.728655+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1605632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:13.728770+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1605632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:14.728876+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1597440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:15.728991+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1597440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:16.729121+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1597440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:17.729259+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1589248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:18.729390+0000)
Dec 13 07:36:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 13 07:36:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826344808' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1589248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:19.729545+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1581056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:20.729685+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1572864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:21.729825+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1572864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:22.729933+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1564672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:23.730035+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1564672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:24.730131+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 1556480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:25.730251+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 1556480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:26.730356+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1548288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:27.730473+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1548288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:28.730588+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1548288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:29.730754+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1540096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:30.730891+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1540096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:31.731028+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1540096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:32.731167+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1531904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:33.731272+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 1523712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:34.731384+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1515520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:35.731559+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1515520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:36.731702+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1515520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:37.731831+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1507328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:38.731983+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1507328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:39.732171+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1499136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:40.732334+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1499136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:41.732488+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1499136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:42.732678+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1490944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:43.732867+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1490944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:44.732979+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1474560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:45.733141+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1474560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:46.733238+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1474560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:47.733341+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1466368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:48.733467+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1466368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:49.733629+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1458176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:50.733767+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1458176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:51.733901+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1458176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:52.734032+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1449984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:53.734177+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1449984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:54.734289+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1449984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:55.734409+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1441792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:56.734516+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1441792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:57.734626+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1433600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:58.734736+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1433600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:59.734868+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1433600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:00.734971+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1425408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:01.735084+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1425408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:02.735213+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1417216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:03.735323+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1417216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:04.735426+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1400832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:05.735535+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1392640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:06.735631+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1392640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:07.735727+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1384448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:08.735828+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1384448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:09.735945+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1384448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:10.736054+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 1376256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:11.736159+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:12.736295+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:13.736454+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:14.736562+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:15.736654+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1359872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:16.736785+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1359872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:17.736919+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:18.737027+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:19.737159+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:20.737292+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1343488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:21.737390+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1343488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:22.737465+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1335296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:23.737562+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1335296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:24.737687+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:25.737824+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:26.737969+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:27.738125+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:28.738267+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1318912 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:29.738394+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1318912 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:30.738508+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1310720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:31.738649+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:32.738758+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:33.738904+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:34.739011+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:35.739135+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:36.739270+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1286144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:37.739400+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1286144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:38.739483+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1277952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:39.739615+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1269760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:40.739744+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:41.739868+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:42.739995+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1368064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:43.740116+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1359872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:44.740236+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1359872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:45.740388+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:46.740674+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:47.740783+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1351680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:48.740905+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1343488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:49.741027+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1343488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:50.741167+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1335296 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:51.741309+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:52.741449+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1327104 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:53.741561+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1318912 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:54.741930+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1318912 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:55.742057+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1310720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:56.742197+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1310720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:57.742333+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:58.742475+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:59.742630+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1302528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:00.742752+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:01.743036+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:02.743429+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1294336 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:03.743558+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1286144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:04.743663+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1286144 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:05.743781+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1277952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:06.743891+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1277952 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:07.744079+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1269760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:08.744212+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1269760 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:09.744386+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 1261568 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:10.744535+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1253376 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:11.744666+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1253376 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:12.744780+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1253376 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:13.744919+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 1245184 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:14.745029+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 1245184 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:15.746205+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 1236992 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:16.746336+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 1228800 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:17.746505+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 1228800 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:18.746654+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1220608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:19.746848+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1220608 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:20.747038+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1212416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:21.747179+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1212416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:22.747385+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1212416 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:23.747575+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 1204224 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:24.747710+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 1204224 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:25.747916+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 1196032 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:26.748077+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 1196032 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:27.748190+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 1187840 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:28.748311+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 1187840 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:29.748483+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 1187840 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:30.748580+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 1179648 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:31.748678+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 1179648 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:32.748819+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1171456 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:33.748920+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1171456 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:34.749023+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1171456 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:35.749153+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 1155072 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:36.749254+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 1155072 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:37.749383+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 1146880 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:38.749512+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 1146880 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:39.749658+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1138688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:40.749790+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1138688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:41.749889+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1138688 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:42.749993+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 1130496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:43.750100+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 1130496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:44.750193+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 1130496 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:45.750293+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 1122304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:46.750396+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 1122304 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:47.750506+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 1114112 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:48.750625+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 1114112 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:49.750765+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 1105920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:50.750872+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 1105920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:51.750975+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 1105920 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:52.751188+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 1097728 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:53.751293+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 1097728 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:54.751389+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 1097728 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:55.751505+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 1089536 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:56.751622+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 1089536 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:57.751765+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 1081344 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:58.751887+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 1081344 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:59.752014+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1073152 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:00.752137+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1064960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:01.752268+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1064960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:02.752377+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1056768 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:03.752483+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1056768 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:04.752588+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1048576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:05.752690+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1048576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:06.752799+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1048576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:07.752903+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:08.753019+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:09.753146+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:10.753281+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:11.753383+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:12.753480+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:13.753619+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:14.753795+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:15.753904+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:16.754004+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:17.754106+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:18.754206+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:19.754333+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:20.754429+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 991232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:21.754545+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 991232 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:22.754651+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 983040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:23.754753+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 983040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:24.754850+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 983040 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:25.754956+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 974848 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:26.755065+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 974848 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:27.755160+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 966656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:28.755257+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 966656 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:29.755382+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 958464 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:30.755476+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 958464 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:31.755575+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 958464 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:32.755676+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 950272 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:33.755790+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 950272 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:34.755875+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 942080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:35.755963+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 942080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:36.756072+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 942080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:37.756181+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 942080 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:38.756290+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 933888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:39.756431+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 933888 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:40.756593+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 925696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:41.756693+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 925696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:42.756790+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 925696 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:43.756918+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 917504 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:44.757036+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 917504 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6995 writes, 28K keys, 6995 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6995 writes, 1406 syncs, 4.98 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6995 writes, 28K keys, 6995 commit groups, 1.0 writes per commit group, ingest: 19.58 MB, 0.03 MB/s
                                           Interval WAL: 6995 writes, 1406 syncs, 4.98 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:45.757158+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 802816 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:46.757250+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 802816 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:47.757405+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 794624 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:48.757509+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 794624 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:49.757649+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 794624 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:50.757757+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 786432 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:51.757890+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 786432 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:52.758046+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 778240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:53.758167+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 778240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:54.758294+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 778240 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:55.758427+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 770048 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:56.758622+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 770048 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:57.758768+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 770048 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:58.758894+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 761856 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:59.759036+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 761856 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:00.759149+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 753664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:01.759244+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 753664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:02.759381+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 753664 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:03.759546+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 745472 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:04.759680+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 745472 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:05.759838+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 737280 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:06.759996+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 729088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:07.760132+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 729088 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:08.760287+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 720896 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:09.760502+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 720896 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:10.760594+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 712704 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:11.760694+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 712704 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:12.760794+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 712704 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:13.760925+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 704512 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:14.761058+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 704512 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:15.761172+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 696320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:16.761279+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 696320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:17.761365+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 696320 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:18.761483+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 688128 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:19.761616+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 688128 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:20.761749+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 679936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:21.761862+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 679936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:22.761991+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 679936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:23.762147+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 679936 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:24.762244+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 671744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:25.762368+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 671744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:26.762523+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 671744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:27.762669+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 671744 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:28.762869+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 663552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:29.763091+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 663552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:30.763223+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 663552 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:31.763356+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:32.763519+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:33.763679+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 647168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:34.763811+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 647168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:35.763949+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 293.603179932s of 293.610778809s, submitted: 10
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 606208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:36.764088+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:37.764201+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:38.764335+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:39.764486+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:40.764582+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:41.764679+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:42.764768+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:43.764868+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 655360 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:44.764965+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 647168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:45.765068+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 647168 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:46.765188+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 638976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:47.765290+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 638976 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:48.765403+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 630784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:49.765553+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 630784 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:50.765654+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 622592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:51.765753+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 622592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:52.765854+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 622592 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:53.765958+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 614400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:54.766057+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 614400 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:55.766169+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 606208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:56.766282+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 606208 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:57.766412+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 598016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:58.766544+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 598016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:59.766655+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 598016 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:00.766753+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 589824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:01.766875+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 589824 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:02.767012+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 581632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:03.767148+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 581632 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:04.767268+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 573440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:05.767354+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 573440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:06.767537+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 573440 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:07.767648+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 565248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:08.767763+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 565248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:09.767894+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 565248 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:10.768000+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:11.768139+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:12.768256+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:13.768362+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:14.768476+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:15.768577+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 540672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:16.768680+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 540672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:17.768787+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 532480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:18.768886+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 532480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:19.769004+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 532480 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:20.769110+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 524288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:21.769277+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 524288 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:22.769387+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:23.769484+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:24.769596+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:25.769692+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 507904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:26.769795+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 507904 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:27.769923+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 499712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:28.770030+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 499712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:29.770192+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 499712 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:30.770328+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 491520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:31.770471+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 491520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:32.770562+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:33.770669+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:34.770770+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:35.770876+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 475136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:36.770968+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 475136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:37.771123+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 475136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:38.771257+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 466944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:39.771377+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 466944 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:40.771484+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 458752 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:41.771602+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 458752 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:42.771730+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 458752 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:43.771851+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 450560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:44.771951+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 450560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:45.772043+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:46.772134+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:47.772356+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:48.772459+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:49.772569+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:50.772668+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:51.772762+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:52.772878+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:53.772976+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:54.773073+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:55.773173+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:56.773271+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:57.773377+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:58.773470+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:59.773594+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:00.773702+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:01.773795+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:02.773901+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:03.773998+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:04.774093+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:05.774187+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:06.774275+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:07.774376+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:08.774507+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:09.774614+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:10.774714+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:11.774815+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:12.774882+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:13.774988+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:14.775097+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:15.775205+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:16.775307+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:17.775410+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:18.775530+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:19.775719+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:20.775833+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:21.775959+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:22.776052+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:23.776157+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:24.776266+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:25.776379+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:26.776503+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:27.776609+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:28.776716+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:29.776836+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:30.776940+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:31.777039+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:32.777146+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:33.777238+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 442368 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:34.777338+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:35.777486+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:36.777593+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:37.777718+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:38.777843+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:39.778013+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:40.778117+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:41.778232+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:42.778321+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:43.778429+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:44.778547+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:45.778644+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:46.778742+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:47.778840+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:48.778931+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:49.779059+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:50.779185+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:51.779286+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:52.779416+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:53.779487+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:54.779617+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:55.779718+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:56.779847+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:57.779944+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:58.780046+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:59.780154+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:00.780259+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:01.780353+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:02.780463+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:03.780569+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:04.780658+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:05.780775+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:06.780888+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:07.780990+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:08.781127+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:09.781249+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:10.781343+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:11.781487+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:12.781589+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 425984 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:13.781695+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:14.781799+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:15.781945+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:16.782055+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:17.782169+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:18.782272+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:19.782408+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:20.782521+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:21.782633+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:22.782764+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:23.782878+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:24.782985+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:25.783106+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:26.783201+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:27.783300+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:28.783399+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:29.783480+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:30.783617+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:31.783730+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:32.783835+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:33.783948+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:34.784109+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:35.784219+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:36.784334+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:37.784462+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:38.784601+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:39.784762+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:40.784876+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:41.784981+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:42.785093+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:43.785204+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:44.785299+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:45.785427+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:46.785485+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:47.785585+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:48.785679+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:49.785789+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:50.785880+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:51.785971+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:52.786065+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:53.786160+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:54.786263+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:55.786364+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:56.786461+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:57.786555+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:58.786658+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:59.786784+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:00.786907+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:01.787014+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:02.787120+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:03.787222+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:04.787321+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:05.787416+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:06.787473+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:07.787570+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:08.787690+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:09.787836+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:10.787960+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:11.788085+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:12.788187+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:13.788285+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:14.788394+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:15.788468+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:16.788560+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:17.788649+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:18.788738+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:19.788851+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:20.788948+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:21.789047+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:22.789166+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:23.789273+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:24.789367+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:25.789481+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:26.789703+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:27.789799+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:28.789868+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:29.789954+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:30.789996+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:31.790094+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:32.790196+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:33.790297+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:34.790408+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:35.790515+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:36.790614+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:37.790715+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:38.790812+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:39.790930+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:40.791037+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:41.791135+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:42.791246+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:43.791349+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:44.791472+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:45.791580+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:46.791720+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:47.793235+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 393216 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:48.793358+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 385024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:49.793511+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 385024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:50.793621+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 385024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:51.793719+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 385024 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:52.793852+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 ms_handle_reset con 0x560fe06e3c00 session 0x560fe01b0c40
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: handle_auth_request added challenge on 0x560fe1fee000
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 ms_handle_reset con 0x560fe06e3400 session 0x560fe016dc00
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: handle_auth_request added challenge on 0x560fe00b4400
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:53.794011+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:54.794125+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:55.794280+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:56.794413+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:57.794490+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:58.794633+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:59.794797+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:00.794944+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:01.795081+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:02.795200+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:03.795329+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:04.795482+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:05.795615+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:06.795720+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:07.795810+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:08.795933+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:09.796074+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:10.796185+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:11.796301+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:12.796399+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:13.796499+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:14.796603+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:15.796709+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:16.796827+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:17.796936+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:18.797034+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:19.797173+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:20.797273+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:21.797383+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:22.797478+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:23.797620+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:24.797754+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:25.797855+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:26.797985+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:27.798120+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:28.798233+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:29.798408+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:30.798528+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:31.798630+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:32.798730+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:33.798864+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:34.798995+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:35.799087+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.071716309s of 300.105926514s, submitted: 90
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:36.799239+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:37.799354+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:38.799492+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:39.799615+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:40.799716+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:41.799836+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:42.799975+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:43.800108+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:44.800274+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:45.800396+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:46.800530+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:47.800662+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:48.800763+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:49.800880+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:50.800985+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:51.801124+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:52.801226+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:53.801371+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:54.801506+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:55.801652+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:56.801769+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:57.801906+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:58.802019+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:59.802172+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:00.802330+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:01.802468+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:02.802602+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:03.802747+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:04.802854+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:05.802963+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:06.803200+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:07.803481+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:08.803622+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:09.803783+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:10.803915+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:11.804058+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:12.804185+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:13.804319+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:14.804462+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:15.804606+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:16.804708+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:17.804803+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:18.804944+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:19.805064+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:20.805181+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:21.805308+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:22.805404+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:23.805495+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:24.805633+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:25.805751+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:26.805848+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:27.805954+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:28.806047+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:29.806162+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:30.806282+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:31.806407+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 376832 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:32.806465+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 368640 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:33.806590+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:34.806689+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:35.806794+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:36.806900+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:37.807004+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:38.807106+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:39.807256+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:40.807388+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:41.807514+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:42.807612+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:43.807703+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:44.808879+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:45.808979+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:46.809078+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:47.809179+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:48.809306+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:49.809483+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:50.809589+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:51.809690+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:52.809786+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:53.809882+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:54.810011+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:55.810112+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:56.810234+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:57.810336+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:58.810456+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:59.810595+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:00.810714+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:01.810845+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:02.810942+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:03.811071+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:04.811173+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:05.811274+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:06.811380+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:07.811481+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:08.811608+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:09.811765+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:10.811903+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:11.812047+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:12.812141+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:13.812263+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:14.812373+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:15.812503+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:16.812607+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:17.812699+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:18.812795+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:19.812971+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:20.813103+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:21.813248+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:22.813350+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 360448 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:23.813468+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:24.813617+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:25.814297+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:26.814399+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:27.814527+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 352256 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:28.814660+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:29.814766+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:30.814893+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:31.814993+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:32.815119+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:33.815274+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:34.815377+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:35.815503+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:36.815624+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:37.815742+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:38.816105+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:39.816250+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:40.816353+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:41.816477+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:42.816604+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:43.816709+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:44.816813+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:45.816946+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:46.817077+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:47.817183+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:48.817286+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:49.817397+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:50.817466+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:51.817585+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:52.817683+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:53.817808+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:54.817944+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:55.818052+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:56.818147+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:57.818275+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:58.818372+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 344064 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:59.818546+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:00.818714+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:01.818865+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:02.818960+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:03.819073+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:04.819199+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:05.819325+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:06.819470+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:07.819574+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:08.819674+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:09.819846+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:10.820002+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:11.820156+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:12.820247+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:13.820333+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:14.820476+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:15.820597+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:16.820719+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:17.820840+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:18.820926+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:19.821044+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:20.821153+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:21.821245+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:22.821333+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:23.821425+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:24.821545+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:25.821687+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 335872 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:26.821783+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:27.822322+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:28.822427+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:29.822572+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:30.822731+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:31.822844+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:32.822945+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:33.823048+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:34.823158+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:35.823300+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:36.823418+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:37.823492+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:38.823591+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:39.823714+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:40.823846+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:41.823976+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:42.824301+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:43.824410+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:44.824534+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:45.824715+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:46.824868+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:47.825030+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:48.825159+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:49.825334+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:50.825476+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:51.825588+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:52.825690+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:53.825785+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:54.825886+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 327680 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:55.826047+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:56.826190+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:57.826349+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:58.826460+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:59.826573+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:00.826730+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:01.826874+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:02.826976+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:03.827073+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:04.827161+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:05.827259+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:06.827385+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:07.827504+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:08.827602+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:09.827755+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:10.827882+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:11.828003+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:12.828105+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:13.828209+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:14.828361+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:15.828508+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:16.828643+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:17.828765+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:18.828893+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:19.829030+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:20.829157+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:21.829292+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:22.829394+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:23.829470+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:24.829573+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:25.829689+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:26.829799+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:27.829934+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:28.830040+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:29.830182+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:30.830334+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:31.830477+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:32.830631+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:33.830735+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:34.830877+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:35.831013+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:36.831117+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:37.831253+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:38.831360+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:39.831527+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:40.831632+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:41.831765+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:42.831873+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:43.832003+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:44.832107+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 319488 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7219 writes, 28K keys, 7219 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7219 writes, 1518 syncs, 4.76 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.5      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fdde7f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:45.832242+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:46.832386+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:47.832548+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:48.832689+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:49.832825+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:50.832952+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:51.833076+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:52.833165+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:53.833261+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:54.833393+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:55.833473+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:56.833574+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 286720 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:57.833738+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:58.833867+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:59.834019+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:00.834170+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:01.834308+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:02.834403+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:03.834515+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 278528 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:04.834653+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:05.834783+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:06.834883+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:07.834976+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:08.835077+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:09.835202+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:10.835316+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:11.835463+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:12.835546+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:13.835673+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:14.835807+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:15.835915+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:16.836046+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:17.836171+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:18.836300+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:19.836463+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:20.836592+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:21.836734+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:22.836861+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:23.836992+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:24.837087+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:25.837199+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:26.837285+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:27.837385+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:28.837512+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:29.837679+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 409600 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:30.837820+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:31.837967+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:32.838097+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:33.838222+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:34.838334+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:35.838463+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.917694092s of 299.925079346s, submitted: 22
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:36.838592+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:37.838723+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:38.838818+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:39.838976+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:40.839106+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:41.839230+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:42.839349+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:43.839473+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:44.839607+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:45.839751+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:46.839881+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:47.839984+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:48.840092+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:49.840245+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:50.840372+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:51.840589+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:52.840735+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:53.840885+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:54.841002+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:55.841126+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:56.841256+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:57.841377+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:58.841475+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:59.841650+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:00.841753+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:01.841869+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:02.841970+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:03.842060+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:04.842214+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:05.842327+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:06.842477+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:07.842612+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:08.842706+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:09.842899+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:10.843037+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:11.843173+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:12.843300+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:13.843395+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:14.843523+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:15.843666+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:16.843780+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:17.843909+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:18.844030+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:19.844141+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:20.844272+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:21.844397+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:22.844524+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:23.844684+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:24.844800+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:25.844927+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:26.845064+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:27.845191+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:28.845296+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:29.845451+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:30.845572+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:31.845698+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:32.845808+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:33.845929+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:34.846049+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:35.846169+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:36.846320+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:37.846468+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:38.846594+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:39.846693+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:40.846786+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:41.846877+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:42.846976+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:43.847072+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:44.847169+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:45.847310+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:46.847466+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:47.847593+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:48.847689+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:49.847799+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:50.847901+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:51.848003+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:52.848116+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:53.848214+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:54.848320+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1433600 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:55.848415+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce5a000/0x0/0x4ffc00000, data 0x11bb17/0x1d2000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 1327104 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:56.848568+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'config diff' '{prefix=config diff}'
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'config show' '{prefix=config show}'
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 1851392 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:57.848664+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:29 compute-0 ceph-osd[86142]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:29 compute-0 ceph-osd[86142]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 1794048 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:29 compute-0 ceph-osd[86142]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993721 data_alloc: 218103808 data_used: 7527
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: tick
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_tickets
Dec 13 07:36:29 compute-0 ceph-osd[86142]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:58.848760+0000)
Dec 13 07:36:29 compute-0 ceph-osd[86142]: do_command 'log dump' '{prefix=log dump}'
Dec 13 07:36:29 compute-0 ceph-mon[74928]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:29 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1235680705' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 07:36:29 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/203291855' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 07:36:29 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2826344808' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 07:36:29 compute-0 rsyslogd[962]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 07:36:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 13 07:36:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/568187850' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 13 07:36:29 compute-0 rsyslogd[962]: imjournal from <np0005558317:ceph-osd>: begin to drop messages due to rate-limiting
Dec 13 07:36:29 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 13 07:36:29 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174135580' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 13 07:36:30 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044739193' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 13 07:36:30 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888847580' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 13 07:36:30 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 13 07:36:30 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028533829' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 13 07:36:30 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3908478244' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/568187850' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1174135580' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2044739193' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3888847580' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: pgmap v792: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:30 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3028533829' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 13 07:36:30 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3908478244' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 13 07:36:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 13 07:36:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793410689' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 13 07:36:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 13 07:36:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1148040650' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 13 07:36:31 compute-0 sudo[248597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:36:31 compute-0 sudo[248597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:31 compute-0 sudo[248597]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 07:36:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271236380' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 07:36:31 compute-0 sudo[248639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 07:36:31 compute-0 sudo[248639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 13 07:36:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046310430' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 13 07:36:31 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1793410689' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 13 07:36:31 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1148040650' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 13 07:36:31 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4271236380' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 07:36:31 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4046310430' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 13 07:36:31 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 13 07:36:31 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201994957' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 13 07:36:31 compute-0 podman[248757]: 2025-12-13 07:36:31.947541004 +0000 UTC m=+0.075550359 container exec 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 13 07:36:32 compute-0 podman[248757]: 2025-12-13 07:36:32.031842084 +0000 UTC m=+0.159851449 container exec_died 4656a144eefb5f6bf26a1e6dd6df77bfd9faa9dce17b22c42a655c087745995a (image=quay.io/ceph/ceph:v20, name=ceph-00fdae1b-7fad-5f1b-8734-ba4d9298a6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:36:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 13 07:36:32 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781187778' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 13 07:36:32 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14510 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:32 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14512 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:32 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:32 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3201994957' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 13 07:36:32 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2781187778' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 13 07:36:32 compute-0 ceph-mon[74928]: from='client.14510 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:32 compute-0 ceph-mon[74928]: from='client.14512 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:32 compute-0 ceph-mon[74928]: pgmap v793: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:36:32 compute-0 sudo[248639]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:36:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:32 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:36:32 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:32 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:32 compute-0 sudo[249014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:36:32 compute-0 sudo[249014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:32 compute-0 sudo[249014]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:33 compute-0 sudo[249045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 07:36:33 compute-0 sudo[249045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:33 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14516 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:33 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} v 0)
Dec 13 07:36:33 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 581632 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 362131 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:14:53.520402+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:23.328811+0000 osd.0 (osd.0) 26 : cluster [DBG] 6.0 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:23.339396+0000 osd.0 (osd.0) 27 : cluster [DBG] 6.0 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 573440 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x3024c/0x70000, compress 0x0/0x0/0x0, omap 0x6362, meta 0x1a29c9e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 27)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:23.328811+0000 osd.0 (osd.0) 26 : cluster [DBG] 6.0 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:23.339396+0000 osd.0 (osd.0) 27 : cluster [DBG] 6.0 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:14:54.520553+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 565248 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:14:55.520664+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 565248 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:14:56.520800+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 557056 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:14:57.520908+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 548864 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 364542 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019509315s of 10.025353432s, submitted: 10
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:14:58.521019+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:28.406280+0000 osd.0 (osd.0) 28 : cluster [DBG] 6.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:28.416866+0000 osd.0 (osd.0) 29 : cluster [DBG] 6.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 532480 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 29)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:28.406280+0000 osd.0 (osd.0) 28 : cluster [DBG] 6.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:28.416866+0000 osd.0 (osd.0) 29 : cluster [DBG] 6.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:14:59.521182+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:29.451286+0000 osd.0 (osd.0) 30 : cluster [DBG] 6.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:29.461874+0000 osd.0 (osd.0) 31 : cluster [DBG] 6.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 524288 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x3024c/0x70000, compress 0x0/0x0/0x0, omap 0x6362, meta 0x1a29c9e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 31)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:29.451286+0000 osd.0 (osd.0) 30 : cluster [DBG] 6.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:29.461874+0000 osd.0 (osd.0) 31 : cluster [DBG] 6.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:00.521304+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:30.478398+0000 osd.0 (osd.0) 32 : cluster [DBG] 4.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:30.489002+0000 osd.0 (osd.0) 33 : cluster [DBG] 4.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 491520 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 33)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:30.478398+0000 osd.0 (osd.0) 32 : cluster [DBG] 4.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:30.489002+0000 osd.0 (osd.0) 33 : cluster [DBG] 4.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:01.521423+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 491520 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:02.521514+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 1 last_log 34 sent 33 num 1 unsent 1 sending 1
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:32.513102+0000 osd.0 (osd.0) 34 : cluster [DBG] 6.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 466944 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 374192 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 34)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:32.513102+0000 osd.0 (osd.0) 34 : cluster [DBG] 6.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:03.521687+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 1 last_log 35 sent 34 num 1 unsent 1 sending 1
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:32.523677+0000 osd.0 (osd.0) 35 : cluster [DBG] 6.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 466944 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 35)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:32.523677+0000 osd.0 (osd.0) 35 : cluster [DBG] 6.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:04.521877+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 450560 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x3024c/0x70000, compress 0x0/0x0/0x0, omap 0x6362, meta 0x1a29c9e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:05.522010+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61358080 unmapped: 417792 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:06.522092+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61358080 unmapped: 417792 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:07.522203+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:37.488429+0000 osd.0 (osd.0) 36 : cluster [DBG] 6.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:37.502584+0000 osd.0 (osd.0) 37 : cluster [DBG] 6.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 401408 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 376603 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x3024c/0x70000, compress 0x0/0x0/0x0, omap 0x6362, meta 0x1a29c9e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 37)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:37.488429+0000 osd.0 (osd.0) 36 : cluster [DBG] 6.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:37.502584+0000 osd.0 (osd.0) 37 : cluster [DBG] 6.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:08.522377+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61382656 unmapped: 393216 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: handle_auth_request added challenge on 0x557f63ea4800
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:09.522507+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 237568 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x3024c/0x70000, compress 0x0/0x0/0x0, omap 0x6362, meta 0x1a29c9e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.008483887s of 12.015250206s, submitted: 10
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:10.522659+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:40.421604+0000 osd.0 (osd.0) 38 : cluster [DBG] 6.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:40.432205+0000 osd.0 (osd.0) 39 : cluster [DBG] 6.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 1277952 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 39)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:40.421604+0000 osd.0 (osd.0) 38 : cluster [DBG] 6.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:40.432205+0000 osd.0 (osd.0) 39 : cluster [DBG] 6.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:11.522828+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 1269760 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x3024c/0x70000, compress 0x0/0x0/0x0, omap 0x6362, meta 0x1a29c9e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:12.522916+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 1269760 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 379016 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:13.523035+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:43.398221+0000 osd.0 (osd.0) 40 : cluster [DBG] 4.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:43.408808+0000 osd.0 (osd.0) 41 : cluster [DBG] 4.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 1269760 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 41)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:43.398221+0000 osd.0 (osd.0) 40 : cluster [DBG] 4.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:43.408808+0000 osd.0 (osd.0) 41 : cluster [DBG] 4.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:14.523333+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:44.421947+0000 osd.0 (osd.0) 42 : cluster [DBG] 4.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:44.432559+0000 osd.0 (osd.0) 43 : cluster [DBG] 4.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 1245184 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 43)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:44.421947+0000 osd.0 (osd.0) 42 : cluster [DBG] 4.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:44.432559+0000 osd.0 (osd.0) 43 : cluster [DBG] 4.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 43 handle_osd_map epochs [43,44], i have 44, src has [1,44]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 44 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x3024c/0x70000, compress 0x0/0x0/0x0, omap 0x6362, meta 0x1a29c9e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:15.523474+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 1245184 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:16.523598+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 1236992 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:17.523705+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:47.469853+0000 osd.0 (osd.0) 44 : cluster [DBG] 6.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:47.480464+0000 osd.0 (osd.0) 45 : cluster [DBG] 6.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 1228800 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389021 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 44 heartbeat osd_stat(store_statfs(0x4fe157000/0x0/0x4ffc00000, data 0x31de8/0x73000, compress 0x0/0x0/0x0, omap 0x65ed, meta 0x1a29a13), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 44 handle_osd_map epochs [45,46], i have 44, src has [1,46]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 44 handle_osd_map epochs [45,46], i have 46, src has [1,46]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 45)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:47.469853+0000 osd.0 (osd.0) 44 : cluster [DBG] 6.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:47.480464+0000 osd.0 (osd.0) 45 : cluster [DBG] 6.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:18.523847+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:48.477291+0000 osd.0 (osd.0) 46 : cluster [DBG] 6.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:15:48.487885+0000 osd.0 (osd.0) 47 : cluster [DBG] 6.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 1105920 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 47)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:48.477291+0000 osd.0 (osd.0) 46 : cluster [DBG] 6.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:15:48.487885+0000 osd.0 (osd.0) 47 : cluster [DBG] 6.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:19.523993+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 1220608 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 47 handle_osd_map epochs [48,49], i have 47, src has [1,49]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:20.524121+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 49 heartbeat osd_stat(store_statfs(0x4fe144000/0x0/0x4ffc00000, data 0x3aaf6/0x82000, compress 0x0/0x0/0x0, omap 0x6d8e, meta 0x1a29272), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 1089536 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 49 handle_osd_map epochs [49,50], i have 49, src has [1,50]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.964981079s of 10.975081444s, submitted: 14
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:21.524249+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 1089536 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:22.524350+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 1081344 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409680 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:23.524488+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1048576 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:24.524617+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1048576 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:25.524718+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 1040384 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe143000/0x0/0x4ffc00000, data 0x3c545/0x85000, compress 0x0/0x0/0x0, omap 0x7019, meta 0x1a28fe7), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:26.524825+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000098 1 0.000035
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000089 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000015
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000081 1 0.000055
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000019
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000020
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000096 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000106 1 0.000036
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000055 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000093 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000020
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000100 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007708 2 0.000037
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.007374 2 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.d( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006012 2 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005007 2 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004751 2 0.000020
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.004520 2 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.9( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003998 2 0.000019
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.1( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.003590 2 0.000019
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.15( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003427 2 0.000017
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002545 2 0.000060
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.e( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002187 2 0.000020
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000050
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000050 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000144
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000062 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000153
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000155 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000088 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000117 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000134 1 0.000034
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000181 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000034
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000048 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000065 1 0.000145
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000055 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000145
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000106 1 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000171 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000019
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000101 1 0.000034
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000103 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000035
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000147 1 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000110 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000090 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000066 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000079 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000068 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000189 1 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000117 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000086 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000123 1 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000035 1 0.000046
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000076 1 0.000031
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000091 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000074 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000010
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000080 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000007
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000109 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000226 1 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000103 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000079 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000129 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=0 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000166 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=0 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000180 1 0.000037
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=0 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.028081 2 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022725 2 0.000051
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018066 2 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.1( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015527 2 0.000051
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014980 2 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014659 2 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017381 2 0.000071
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012159 2 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011330 2 0.000037
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.010042 2 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007780 2 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007148 2 0.000017
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006622 2 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006191 2 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005424 2 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004483 2 0.000018
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001934 2 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002209 2 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.013162 2 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.013523 2 0.000027
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007869 2 0.000019
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 51 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 212992 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.873513 2 0.000042
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.880418 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.873537 2 0.000016
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.875608 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.902692 2 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.902802 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.902822 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.875351 2 0.000033
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.903539 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.900521 2 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904350 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000429 1 0.000467
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.874177 2 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.876603 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.901401 2 0.000035
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.908879 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.878939 2 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.874318 2 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.879036 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.881536 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.879050 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.901308 2 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.905378 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000044 1 0.000065
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.874442 2 0.000015
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.879968 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.874528 2 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.879077 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.881935 2 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.882030 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.882043 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.878164 2 0.000121
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.878342 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.878354 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000046 1 0.000063
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.881125 2 0.000054
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.881244 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.881260 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000108 1 0.000120
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000031 1 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902072 2 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.909906 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.892858 2 0.000054
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.893053 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000086 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.893068 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000023 1 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.875270 2 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.893434 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.886689 2 0.000039
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.886785 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.886798 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.890545 2 0.000072
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.890728 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000027 1 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.890747 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.875222 2 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.892740 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.901897 2 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.904588 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000124 1 0.000137
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.889169 2 0.000036
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.889275 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.889288 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000065 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.875515 2 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.891160 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000029 1 0.000047
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.875469 2 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.890305 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.875568 2 0.000015
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.890682 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.875263 2 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.888888 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.902137 2 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.904394 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.889697 2 0.000027
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.889778 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.889791 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000178 1 0.000169
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.876365 2 0.000017
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.884252 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.903534 2 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.908361 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.905088 2 0.000031
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.905202 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.905213 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000081 1 0.000095
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.876892 2 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.889201 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.885817 2 0.000036
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.885920 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.885935 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000022 1 0.000036
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.876995 2 0.000020
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.887142 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.904115 2 0.000054
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910225 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.877137 2 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.885069 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.904350 2 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.908013 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.891442 2 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.891545 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.891557 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000030 1 0.000043
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.904943 2 0.000012
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.910029 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 lc 0'0 (0'0,43'66] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.878087 2 0.000014
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.889571 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.877966 2 0.000017
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.891350 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.905285 2 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.909914 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 0'0 (0'0,48'67] local-lis/les=47/48 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.885974 2 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.886065 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.886079 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000047 1 0.000068
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.878953 2 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.901805 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=45/46 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.884699 2 0.000033
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.884776 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.884789 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000027 1 0.000042
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.897229 2 0.000094
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.897616 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.897714 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.878791 2 0.000018
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.885053 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 lc 0'0 (0'0,43'2] local-lis/les=49/50 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000101 1 0.000314
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005721 3 0.000057
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005941 3 0.000031
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.17( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005882 3 0.000046
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005710 3 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005495 3 0.000051
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005571 3 0.000069
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005418 3 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.19( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005318 4 0.000036
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1e( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005174 3 0.000059
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.1( v 43'2 (0'0,43'2] local-lis/les=51/52 n=1 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005091 3 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.f( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005046 4 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004912 3 0.000034
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005923 4 0.000408
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004988 3 0.000037
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.004601 3 0.000332
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003953 3 0.000080
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.6( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003783 4 0.000042
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.7( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003599 3 0.000037
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003477 3 0.000044
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003041 3 0.000098
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.14( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003389 4 0.000079
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.4( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002652 4 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.8( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002643 3 0.000027
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.4( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000484 1 0.000027
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003174 4 0.001044
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 lc 43'54 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002610 3 0.000060
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005296 4 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005161 4 0.001194
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.17( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.1( v 43'66 (0'0,43'66] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005523 4 0.000478
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.16( v 43'66 (0'0,43'66] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=43'66 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002597 4 0.000117
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002425 3 0.000043
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002249 3 0.000057
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.10( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 52 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004886 3 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[11.e( v 43'2 (0'0,43'2] local-lis/les=51/52 n=0 ec=49/41 lis/c=51/49 les/c/f=52/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=43'2 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.037649 2 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.e( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.038149 2 0.000013
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 lc 43'55 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:27.524917+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125519 1 0.000200
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.d( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.163671 3 0.000016
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.154362 1 0.000044
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.317941 3 0.000013
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 778240 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489262 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.259186 1 0.000035
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=51/52 n=1 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.576916 2 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 lc 43'53 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.010569 1 0.000116
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.15( v 48'67 (0'0,48'67] local-lis/les=51/52 n=0 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.587496 3 0.000019
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.092983 1 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=51/52 n=0 ec=45/35 lis/c=51/45 les/c/f=52/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=36'6 mlcod 36'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.680248 2 0.000019
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 lc 43'58 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.010501 1 0.000042
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 52 pg[10.9( v 48'67 (0'0,48'67] local-lis/les=51/52 n=1 ec=47/39 lis/c=51/47 les/c/f=52/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=48'67 mlcod 48'67 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:28.525029+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 532480 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: handle_auth_request added challenge on 0x557f63ea4400
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.329772 5 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.329636 5 0.000051
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.5( v 48'552 lc 0'0 (0'0,48'552] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=48'552 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.329987 5 0.000049
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.5( v 48'552 lc 0'0 (0'0,48'552] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=48'552 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.5( v 48'552 lc 0'0 (0'0,48'552] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=48'552 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.330859 5 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.332109 5 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.332745 5 0.000053
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.333597 5 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.333704 5 0.000088
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.333933 5 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.334067 5 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.334261 5 0.000037
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.334228 5 0.000107
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=3 mbc={}] exit Started/Stray 1.334671 5 0.000138
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.334473 5 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.335091 5 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.332601 5 0.000018
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.7( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=43'551 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002323 4 0.000079
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000033 1 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 lc 42'122 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 53 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.043252 1 0.000012
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 lc 42'113 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.045692 4 0.000089
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 lc 42'113 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 lc 42'113 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000045 1 0.000037
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 lc 42'113 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052820 1 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.098472 4 0.000044
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000038 1 0.000047
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 lc 42'141 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031485 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 lc 42'126 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.130049 4 0.000039
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 lc 42'126 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 lc 42'126 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000046 1 0.000044
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 lc 42'126 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038617 1 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.168835 4 0.000045
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000055 1 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 lc 42'41 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059829 1 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 lc 42'69 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.228684 4 0.000050
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 lc 42'69 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 lc 42'69 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000043 1 0.000031
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 lc 42'69 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059771 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 lc 42'84 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.288654 4 0.000047
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 lc 42'84 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 lc 42'84 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000107 1 0.000083
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 lc 42'84 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059756 1 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 lc 43'158 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.348607 4 0.000047
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 lc 43'158 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 lc 43'158 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000090 1 0.000090
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 lc 43'158 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.066813 1 0.000047
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 lc 42'153 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.415653 4 0.000036
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 lc 42'153 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 lc 42'153 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000053
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 lc 42'153 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038625 1 0.000056
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.454367 4 0.000042
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000069 1 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 lc 42'133 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038528 1 0.000050
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 lc 43'159 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.493573 4 0.000058
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 lc 43'159 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 lc 43'159 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000058 1 0.000048
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 lc 43'159 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031512 1 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 lc 42'120 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.524816 4 0.000027
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 lc 42'120 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 lc 42'120 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000036 1 0.000074
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 lc 42'120 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.024605 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.549503 4 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000062 1 0.000057
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 lc 42'37 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031505 1 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.581205 4 0.000065
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000059
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 lc 42'56 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.066845 1 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.5( v 53'554 lc 0'0 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.648815 4 0.000080
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.5( v 53'554 lc 0'0 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.5( v 53'554 lc 0'0 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000312 1 0.000049
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 53 pg[9.5( v 53'554 lc 0'0 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 53 heartbeat osd_stat(store_statfs(0x4fe11e000/0x0/0x4ffc00000, data 0x42d6c/0xac000, compress 0x0/0x0/0x0, omap 0x77ba, meta 0x1a28846), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.092780 1 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.674003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009201 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.124667 1 0.000124
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.674261 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009007 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000179 1 0.000435
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000156 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.026414 1 0.000047
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.674613 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009103 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000430 1 0.000471
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.182129 1 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.675182 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009520 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000078 1 0.000129
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000082 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.221071 1 0.000087
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.675686 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009968 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000037 1 0.000200
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.260388 1 0.000082
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.676041 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009995 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.387498 1 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000236 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.327580 1 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.676172 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009957 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000038 1 0.000052
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.676124 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.010315 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.447641 1 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.676424 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.010040 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000048
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000120 1 0.000324
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000417 1 0.000447
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000087 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.507960 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.676744 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009506 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000049
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000019 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.546850 1 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.676901 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.009029 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000043
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.151958 1 0.000036
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.677179 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.008054 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000049
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.578967 1 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.677600 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.631972 1 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.677646 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.007440 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.007273 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000072 1 0.000092
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000115 1 0.000206
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000047 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004358 2 0.000212
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002648 2 0.000187
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003109 2 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002628 2 0.000020
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002472 2 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003058 2 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003204 2 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002099 2 0.000160
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004759 2 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002955 2 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003632 2 0.000269
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004162 2 0.000343
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003410 2 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005869 2 0.000020
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002179 2 0.000042
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=19
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002301 2 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=17
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002281 2 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002343 2 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=25
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002367 2 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=21
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=21
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=12
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002461 2 0.000044
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002394 2 0.000059
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=24
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=24
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002545 2 0.000163
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=20
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=20
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002604 2 0.000171
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=12
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002432 2 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002667 2 0.000024
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=16
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=16
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001724 2 0.000056
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=10
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001752 2 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003034 2 0.000139
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.075077 4 0.000297
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.723363 7 0.000071
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000062 1 0.000051
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.7( v 43'551 lc 42'63 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:29.525115+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052793 1 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 54 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 204800 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000072 2 0.000047
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006653 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999011 2 0.000031
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006671 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999674 2 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006843 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999670 2 0.000034
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006313 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999682 2 0.000088
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006030 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000647 2 0.000056
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005653 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000010 2 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005783 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000698 2 0.000133
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006152 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000091 2 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006069 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.906707 1 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 1.683271 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 3.015899 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.959875 1 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped mbc={}] exit Started/ReplicaActive 1.684237 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped mbc={}] exit Started 3.014246 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 pct=0'0 crt=48'552 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 pct=0'0 crt=48'552 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000043 1 0.000350
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] exit Reset 0.000037 1 0.000118
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001961 2 0.000391
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007173 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001886 2 0.000094
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007367 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000973 2 0.000031
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007137 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002676 2 0.000110
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007356 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002562 2 0.000594
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007859 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=52/53 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001108 2 0.000979
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001535 2 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=0/0 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003826 4 0.000046
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003719 4 0.000038
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003558 4 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003477 4 0.000034
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.3( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003433 4 0.000031
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000140 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003385 4 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003493 4 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003341 4 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001830 4 0.000393
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000208 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.1d( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=23
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=23
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000714 2 0.000025
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001076 4 0.000163
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001276 4 0.000630
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.11( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001736 4 0.000605
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001991 4 0.000067
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004573 4 0.000284
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.9( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=23
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=23
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001188 2 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.b( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000050 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/47 les/c/f=55/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000025 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 55 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:30.525222+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:00.383035+0000 osd.0 (osd.0) 48 : cluster [DBG] 6.a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:00.393647+0000 osd.0 (osd.0) 49 : cluster [DBG] 6.a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1032192 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 49)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:00.383035+0000 osd.0 (osd.0) 48 : cluster [DBG] 6.a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:00.393647+0000 osd.0 (osd.0) 49 : cluster [DBG] 6.a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.927689552s of 10.056975365s, submitted: 421
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999290 2 0.000626
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999935 2 0.000023
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002283 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=48'552 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001907 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=55/56 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'554 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=52/53 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=55/56 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'554 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/47 les/c/f=56/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'554 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000547 3 0.000194
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/47 les/c/f=56/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'554 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/47 les/c/f=56/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'554 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.5( v 53'554 (0'0,53'554] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/47 les/c/f=56/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'554 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/47 les/c/f=56/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000885 3 0.000318
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/47 les/c/f=56/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/47 les/c/f=56/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 56 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/47 les/c/f=56/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:31.525380+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1024000 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:32.525497+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 674543 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 56 heartbeat osd_stat(store_statfs(0x4fe108000/0x0/0x4ffc00000, data 0x48193/0xbc000, compress 0x0/0x0/0x0, omap 0x7f5b, meta 0x1a280a5), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:33.525622+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 909312 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:34.525739+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:04.425045+0000 osd.0 (osd.0) 50 : cluster [DBG] 4.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:04.435714+0000 osd.0 (osd.0) 51 : cluster [DBG] 4.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 51)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:04.425045+0000 osd.0 (osd.0) 50 : cluster [DBG] 4.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:04.435714+0000 osd.0 (osd.0) 51 : cluster [DBG] 4.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:35.525856+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 901120 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 57 heartbeat osd_stat(store_statfs(0x4fe10b000/0x0/0x4ffc00000, data 0x49d2f/0xbf000, compress 0x0/0x0/0x0, omap 0x81e6, meta 0x1a27e1a), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:36.525960+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 901120 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:37.526062+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:07.405385+0000 osd.0 (osd.0) 52 : cluster [DBG] 4.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:07.416108+0000 osd.0 (osd.0) 53 : cluster [DBG] 4.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 884736 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679231 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 53)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:07.405385+0000 osd.0 (osd.0) 52 : cluster [DBG] 4.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:07.416108+0000 osd.0 (osd.0) 53 : cluster [DBG] 4.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:38.526203+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 876544 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 59 heartbeat osd_stat(store_statfs(0x4fe106000/0x0/0x4ffc00000, data 0x4b8cb/0xc2000, compress 0x0/0x0/0x0, omap 0x8471, meta 0x1a27b8f), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:39.526296+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 868352 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:40.526417+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 868352 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:41.526530+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 860160 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:42.526644+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 59 handle_osd_map epochs [60,61], i have 59, src has [1,61]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.116958618s of 11.128263474s, submitted: 15
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 1925120 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687979 data_alloc: 218103808 data_used: 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:43.526753+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 1925120 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:44.526863+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 61 handle_osd_map epochs [63,64], i have 61, src has [1,64]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 61 handle_osd_map epochs [62,64], i have 61, src has [1,64]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.256074 20 0.000055
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.259671 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 15.266005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started 15.266130 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743862152s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949691772s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743840218s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949691772s@ mbc={}] exit Reset 0.000044 1 0.000074
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743840218s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949691772s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743840218s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949691772s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743840218s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949691772s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743840218s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949691772s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.256164 20 0.000046
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.259537 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 15.265614 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started 15.265629 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743666649s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949722290s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743650436s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949722290s@ mbc={}] exit Reset 0.000031 1 0.000054
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743650436s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949722290s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743650436s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949722290s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743650436s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949722290s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743650436s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949722290s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743840218s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949691772s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743650436s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949722290s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.256462 20 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.258350 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 15.265557 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started 15.265583 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743508339s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 active pruub 131.949981689s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743436813s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949981689s@ mbc={}] exit Reset 0.000104 1 0.000172
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=55) [0] r=0 lpr=55 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.255233 18 0.000098
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=55) [0] r=0 lpr=55 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.256324 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=55) [0] r=0 lpr=55 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 14.258291 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743436813s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949981689s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743436813s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949981689s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743436813s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949981689s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743436813s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949981689s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=9.743436813s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY pruub 131.949981689s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=55) [0] r=0 lpr=55 crt=43'551 mlcod 0'0 active mbc={}] exit Started 14.258710 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=55) [0] r=0 lpr=55 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744418144s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 active pruub 132.951171875s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744332314s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY pruub 132.951171875s@ mbc={}] exit Reset 0.000106 1 0.000401
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744332314s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY pruub 132.951171875s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744332314s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY pruub 132.951171875s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744332314s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY pruub 132.951171875s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744332314s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY pruub 132.951171875s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 64 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.744332314s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY pruub 132.951171875s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 64 handle_osd_map epochs [60,64], i have 64, src has [1,64]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 1810432 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe0fd000/0x0/0x4ffc00000, data 0x50b9f/0xcb000, compress 0x0/0x0/0x0, omap 0x8987, meta 0x1a27679), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:45.526964+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.001068 3 0.000033
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.001096 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=-1 lpr=64 pi=[55,64)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000039 1 0.000060
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000027
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.001539 3 0.000111
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.001631 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.002134 3 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.002150 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000031 1 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000064 1 0.000099
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000020 1 0.000042
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.002482 3 0.000240
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.002497 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=-1 lpr=64 pi=[54,64)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000020 1 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 1 0.000021
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000160 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000030 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 65 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 1777664 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:46.527077+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003282 4 0.000042
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003352 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=55/56 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003068 4 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003124 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002833 4 0.000250
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003129 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003111 4 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003158 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.001955 5 0.000184
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000119 1 0.000085
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000299 1 0.000035
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.002649 5 0.000352
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002747 5 0.000120
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002853 5 0.000398
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049517 2 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.049124 1 0.000069
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000390 1 0.000035
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 1687552 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059488 2 0.000079
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.109034 1 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000240 1 0.000050
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038468 2 0.000058
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.147734 1 0.000046
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000230 1 0.000057
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.031411 2 0.000056
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 66 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:47.527165+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 3 last_log 56 sent 53 num 3 unsent 3 sending 3
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:16.549487+0000 osd.0 (osd.0) 54 : cluster [DBG] 5.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:16.560166+0000 osd.0 (osd.0) 55 : cluster [DBG] 5.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:17.517490+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.839141 1 0.000117
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 0.989796 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 1.992966 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 1.992980 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012885094s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214294434s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012840271s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214294434s@ mbc={}] exit Reset 0.000081 1 0.000109
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012840271s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214294434s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012840271s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214294434s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012840271s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214294434s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012840271s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214294434s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.012840271s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214294434s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.878513 1 0.000162
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 0.990536 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 1.993688 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.808161 1 0.000062
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 0.990644 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 1.993812 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 1.993865 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 1.993865 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011977196s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214172363s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011938095s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214172363s@ mbc={}] exit Reset 0.000063 1 0.000331
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011938095s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214172363s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011938095s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214172363s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011938095s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214172363s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011938095s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214172363s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011938095s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214172363s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.938701 1 0.000132
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 0.991102 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 1.994470 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011684418s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 active pruub 140.214202881s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011443138s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214202881s@ mbc={}] exit Reset 0.000599 1 0.000688
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011443138s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214202881s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011443138s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214202881s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011443138s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214202881s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 1.994908 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011443138s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214202881s@ mbc={}] exit Start 0.000099 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.011443138s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY pruub 140.214202881s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.010023117s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 active pruub 140.213317871s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.009953499s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY pruub 140.213317871s@ mbc={}] exit Reset 0.000091 1 0.001213
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.009953499s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY pruub 140.213317871s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.009953499s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY pruub 140.213317871s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.009953499s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY pruub 140.213317871s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.009953499s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY pruub 140.213317871s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 67 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=15.009953499s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY pruub 140.213317871s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 56)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:16.549487+0000 osd.0 (osd.0) 54 : cluster [DBG] 5.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:16.560166+0000 osd.0 (osd.0) 55 : cluster [DBG] 5.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:17.517490+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 1597440 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 709691 data_alloc: 218103808 data_used: 1164
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:48.527309+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 1 last_log 57 sent 56 num 1 unsent 1 sending 1
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:17.528065+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 57)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:17.528065+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 1589248 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 67 heartbeat osd_stat(store_statfs(0x4fe0ed000/0x0/0x4ffc00000, data 0x5af8f/0xdd000, compress 0x0/0x0/0x0, omap 0x93b3, meta 0x1a26c4d), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.617536 6 0.000218
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.616483 6 0.000618
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.616424 6 0.000055
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.618370 6 0.000198
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000448 2 0.000069
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000550 2 0.000060
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000498 2 0.000139
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000512 2 0.000209
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052791 2 0.000184
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053289 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.17( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.669970 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[55,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.104751 2 0.000098
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105335 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.7( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=7 ec=47/37 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=-1 lpr=67 pi=[55,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.721817 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.141654 2 0.000098
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.142195 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.1f( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=6 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.760684 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.200858 2 0.000127
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.201551 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 68 pg[9.f( v 43'551 (0'0,43'551] lb MIN local-lis/les=65/66 n=7 ec=47/37 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=-1 lpr=67 pi=[54,67)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.819147 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:49.527466+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 1523712 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:50.527633+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1482752 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:51.527756+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 70 heartbeat osd_stat(store_statfs(0x4fe0e7000/0x0/0x4ffc00000, data 0x5fba9/0xdd000, compress 0x0/0x0/0x0, omap 0x9b54, meta 0x1a264ac), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1474560 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:52.527861+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:22.401945+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:22.412542+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 59)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:22.401945+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:22.412542+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 1433600 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675751 data_alloc: 218103808 data_used: 593
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.756451607s of 10.787726402s, submitted: 54
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:53.528028+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:23.369703+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:23.380284+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 61)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:23.369703+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:23.380284+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 1425408 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 70 heartbeat osd_stat(store_statfs(0x4fe0ef000/0x0/0x4ffc00000, data 0x5fba9/0xdd000, compress 0x0/0x0/0x0, omap 0x9b54, meta 0x1a264ac), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:54.528159+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:24.345086+0000 osd.0 (osd.0) 62 : cluster [DBG] 2.13 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:24.355665+0000 osd.0 (osd.0) 63 : cluster [DBG] 2.13 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 63)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:24.345086+0000 osd.0 (osd.0) 62 : cluster [DBG] 2.13 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:24.355665+0000 osd.0 (osd.0) 63 : cluster [DBG] 2.13 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 1425408 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:55.528297+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:25.305378+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:25.316007+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 65)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:25.305378+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:25.316007+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 1425408 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:56.528473+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 70 handle_osd_map epochs [71,72], i have 70, src has [1,72]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fe0ef000/0x0/0x4ffc00000, data 0x5fba9/0xdd000, compress 0x0/0x0/0x0, omap 0x9b54, meta 0x1a264ac), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 1400832 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fe0ef000/0x0/0x4ffc00000, data 0x5fba9/0xdd000, compress 0x0/0x0/0x0, omap 0x9b54, meta 0x1a264ac), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:57.528585+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1392640 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689222 data_alloc: 218103808 data_used: 1139
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:58.528696+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1392640 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:15:59.528806+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 1359872 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:00.528925+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 1351680 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:01.529046+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 1253376 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 76 heartbeat osd_stat(store_statfs(0x4fe0da000/0x0/0x4ffc00000, data 0x68506/0xec000, compress 0x0/0x0/0x0, omap 0xa580, meta 0x1a25a80), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:02.529172+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:32.311348+0000 osd.0 (osd.0) 66 : cluster [DBG] 5.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:32.322019+0000 osd.0 (osd.0) 67 : cluster [DBG] 5.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 67)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:32.311348+0000 osd.0 (osd.0) 66 : cluster [DBG] 5.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:32.322019+0000 osd.0 (osd.0) 67 : cluster [DBG] 5.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1286144 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 701571 data_alloc: 218103808 data_used: 1139
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:03.529339+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.325613976s of 10.340367317s, submitted: 50
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 1261568 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0x69f7f/0xef000, compress 0x0/0x0/0x0, omap 0xa80b, meta 0x1a257f5), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:04.529472+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 1253376 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:05.529594+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:35.352186+0000 osd.0 (osd.0) 68 : cluster [DBG] 2.11 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:35.362772+0000 osd.0 (osd.0) 69 : cluster [DBG] 2.11 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 69)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:35.352186+0000 osd.0 (osd.0) 68 : cluster [DBG] 2.11 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:35.362772+0000 osd.0 (osd.0) 69 : cluster [DBG] 2.11 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 1253376 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:06.529723+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 77 handle_osd_map epochs [78,79], i have 77, src has [1,79]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1245184 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:07.529826+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:37.287101+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.12 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:37.297685+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.12 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 71)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:37.287101+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.12 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:37.297685+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.12 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1236992 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718521 data_alloc: 218103808 data_used: 2328
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:08.529987+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:38.260812+0000 osd.0 (osd.0) 72 : cluster [DBG] 3.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:38.271488+0000 osd.0 (osd.0) 73 : cluster [DBG] 3.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 73)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:38.260812+0000 osd.0 (osd.0) 72 : cluster [DBG] 3.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:38.271488+0000 osd.0 (osd.0) 73 : cluster [DBG] 3.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1228800 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:09.530133+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1236992 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 80 heartbeat osd_stat(store_statfs(0x4fe0cf000/0x0/0x4ffc00000, data 0x70ca2/0xfb000, compress 0x0/0x0/0x0, omap 0xafac, meta 0x1a25054), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:10.530236+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 80 handle_osd_map epochs [80,81], i have 81, src has [1,81]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1228800 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:11.530365+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:41.306756+0000 osd.0 (osd.0) 74 : cluster [DBG] 3.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:41.317353+0000 osd.0 (osd.0) 75 : cluster [DBG] 3.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 75)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:41.306756+0000 osd.0 (osd.0) 74 : cluster [DBG] 3.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:41.317353+0000 osd.0 (osd.0) 75 : cluster [DBG] 3.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1187840 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:12.530513+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 1327104 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 731615 data_alloc: 218103808 data_used: 4083
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fe0ce000/0x0/0x4ffc00000, data 0x7283e/0xfe000, compress 0x0/0x0/0x0, omap 0xb237, meta 0x1a24dc9), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:13.530621+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 1318912 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fe0c9000/0x0/0x4ffc00000, data 0x743da/0x101000, compress 0x0/0x0/0x0, omap 0xb4c2, meta 0x1a24b3e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:14.530727+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 1318912 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.188519478s of 11.201033592s, submitted: 15
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 44.458314 77 0.000113
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 44.459498 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 45.467392 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started 45.467422 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.542116165s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 active pruub 163.950912476s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.541775703s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY pruub 163.950912476s@ mbc={}] exit Reset 0.000381 1 0.000528
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.541775703s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY pruub 163.950912476s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.541775703s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY pruub 163.950912476s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.541775703s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY pruub 163.950912476s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.541775703s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY pruub 163.950912476s@ mbc={}] exit Start 0.000095 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 83 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.541775703s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY pruub 163.950912476s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:15.530814+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 0.922263 3 0.000181
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 0.922437 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=83) [2] r=-1 lpr=83 pi=[54,83)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000271 1 0.000333
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000086 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004371 2 0.000225
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 84 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1245184 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:16.530927+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 84 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998422 3 0.000166
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002979 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.002622 5 0.000229
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000044 1 0.000041
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000628 1 0.000009
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 1220608 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042392 2 0.000108
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 85 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:17.531028+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:47.273969+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.13 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:47.284560+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.13 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 85 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.825046 1 0.000048
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 0.871299 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 1.874501 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 1.874657 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131115913s) [2] async=[2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 active pruub 170.337738037s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131011963s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY pruub 170.337738037s@ mbc={}] exit Reset 0.000137 1 0.000718
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131011963s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY pruub 170.337738037s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131011963s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY pruub 170.337738037s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131011963s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY pruub 170.337738037s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131011963s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY pruub 170.337738037s@ mbc={}] exit Start 0.000046 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 86 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86 pruub=15.131011963s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY pruub 170.337738037s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 77)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:47.273969+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.13 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:47.284560+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.13 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 1155072 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745372 data_alloc: 218103808 data_used: 4345
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:18.531186+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:48.267121+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.8 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:48.277861+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.8 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 48.255660 90 0.000787
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 48.260858 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 49.267524 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started 49.267715 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.744177818s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 active pruub 171.950607300s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.743911743s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY pruub 171.950607300s@ mbc={}] exit Reset 0.000296 1 0.000340
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.743911743s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY pruub 171.950607300s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.743911743s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY pruub 171.950607300s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.743911743s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY pruub 171.950607300s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.743911743s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY pruub 171.950607300s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=15.743911743s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY pruub 171.950607300s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 87 handle_osd_map epochs [85,87], i have 87, src has [1,87]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.004722 7 0.000145
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000072 1 0.000105
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=-1 lpr=86 DELETING pi=[54,86)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045371 2 0.000131
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.045493 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 87 pg[9.13( v 43'551 (0'0,43'551] lb MIN local-lis/les=84/85 n=6 ec=47/37 lis/c=84/54 les/c/f=85/55/0 sis=86) [2] r=-1 lpr=86 pi=[54,86)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.050345 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 79)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:48.267121+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.8 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:48.277861+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.8 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 1130496 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fcf1a000/0x0/0x4ffc00000, data 0x7cab6/0x10e000, compress 0x0/0x0/0x0, omap 0xc179, meta 0x2bc3e87), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:19.531358+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.005725 3 0.000029
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.005763 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=-1 lpr=87 pi=[54,87)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000096 1 0.000126
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002089 2 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000179 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 88 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 1130496 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:20.531491+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 88 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002023 3 0.000244
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004358 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.001344 5 0.000287
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000132 1 0.000060
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000317 1 0.000022
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028375 2 0.000083
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 89 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1056768 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:21.531609+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:51.232963+0000 osd.0 (osd.0) 80 : cluster [DBG] 7.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:51.243470+0000 osd.0 (osd.0) 81 : cluster [DBG] 7.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 81)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:51.232963+0000 osd.0 (osd.0) 80 : cluster [DBG] 7.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:51.243470+0000 osd.0 (osd.0) 81 : cluster [DBG] 7.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.982023 1 0.000188
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 1.012447 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 2.016830 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 2.016852 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988631248s) [1] async=[1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 active pruub 174.218078613s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988577843s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY pruub 174.218078613s@ mbc={}] exit Reset 0.000085 1 0.000127
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988577843s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY pruub 174.218078613s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988577843s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY pruub 174.218078613s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988577843s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY pruub 174.218078613s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988577843s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY pruub 174.218078613s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 90 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=14.988577843s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY pruub 174.218078613s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1032192 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:22.531744+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.007050 7 0.000113
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000095
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=-1 lpr=90 DELETING pi=[54,90)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030494 2 0.000115
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030620 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 91 pg[9.15( v 43'551 (0'0,43'551] lb MIN local-lis/les=88/89 n=6 ec=47/37 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=-1 lpr=90 pi=[54,90)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.037821 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 1024000 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746336 data_alloc: 218103808 data_used: 5261
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fcf11000/0x0/0x4ffc00000, data 0x81a03/0x117000, compress 0x0/0x0/0x0, omap 0xc91a, meta 0x2bc36e6), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:23.531846+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:53.170487+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:53.181016+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 83)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:53.170487+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:53.181016+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 1179648 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:24.531986+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:54.165720+0000 osd.0 (osd.0) 84 : cluster [DBG] 7.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:54.176341+0000 osd.0 (osd.0) 85 : cluster [DBG] 7.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=0 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=0 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000040
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 85)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:54.165720+0000 osd.0 (osd.0) 84 : cluster [DBG] 7.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:54.176341+0000 osd.0 (osd.0) 85 : cluster [DBG] 7.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000071 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000106 1 0.000194
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000187 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 1163264 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf0e000/0x0/0x4ffc00000, data 0x84f80/0x11c000, compress 0x0/0x0/0x0, omap 0xce30, meta 0x2bc31d0), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:25.532131+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.800248146s of 10.842355728s, submitted: 60
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 92 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.005104 2 0.000087
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.005335 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.005445 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000060 1 0.000090
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 1089536 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:26.532248+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.007912 6 0.000026
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 crt=43'551 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002668 3 0.000132
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000070
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 lc 42'76 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028628 1 0.000052
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 94 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1032192 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fcf09000/0x0/0x4ffc00000, data 0x86a01/0x11f000, compress 0x0/0x0/0x0, omap 0xd0bb, meta 0x2bc2f45), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:27.532361+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:57.178748+0000 osd.0 (osd.0) 86 : cluster [DBG] 7.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:57.189315+0000 osd.0 (osd.0) 87 : cluster [DBG] 7.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.918724 1 0.000138
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.950264 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 1.958213 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[64,93)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000080 1 0.000105
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002374 2 0.000031
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000251 2 0.000150
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 95 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 87)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:57.178748+0000 osd.0 (osd.0) 86 : cluster [DBG] 7.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:57.189315+0000 osd.0 (osd.0) 87 : cluster [DBG] 7.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1032192 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772214 data_alloc: 218103808 data_used: 5513
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:28.532509+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:58.171520+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:16:58.182119+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000605 2 0.000033
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003369 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=93/94 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=95/96 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=95/96 n=6 ec=47/37 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=95/96 n=6 ec=47/37 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000645 4 0.000104
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=95/96 n=6 ec=47/37 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=95/96 n=6 ec=47/37 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 96 pg[9.16( v 43'551 (0'0,43'551] local-lis/les=95/96 n=6 ec=47/37 lis/c=95/64 les/c/f=96/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 89)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:58.171520+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:16:58.182119+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 933888 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fcefd000/0x0/0x4ffc00000, data 0x8bc34/0x129000, compress 0x0/0x0/0x0, omap 0xd85c, meta 0x2bc27a4), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:29.532648+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 892928 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:30.532757+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 60.324959 121 0.000334
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active 60.328851 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary 61.335704 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] exit Started 61.335727 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=54) [0] r=0 lpr=54 crt=43'551 mlcod 0'0 active mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676051140s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 active pruub 179.950790405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676016808s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY pruub 179.950790405s@ mbc={}] exit Reset 0.000143 1 0.000178
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676016808s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY pruub 179.950790405s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676016808s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY pruub 179.950790405s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676016808s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY pruub 179.950790405s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676016808s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY pruub 179.950790405s@ mbc={}] exit Start 0.000533 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 97 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=11.676016808s) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY pruub 179.950790405s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 843776 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:31.532868+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:01.205233+0000 osd.0 (osd.0) 90 : cluster [DBG] 3.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:01.215805+0000 osd.0 (osd.0) 91 : cluster [DBG] 3.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 91)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:01.205233+0000 osd.0 (osd.0) 90 : cluster [DBG] 3.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:01.215805+0000 osd.0 (osd.0) 91 : cluster [DBG] 3.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.003983 3 0.000591
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.004560 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=97) [2] r=-1 lpr=97 pi=[54,97)/1 crt=43'551 unknown NOTIFY mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Reset 0.000050 1 0.000070
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 1 0.000028
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 98 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 835584 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fcef9000/0x0/0x4ffc00000, data 0x8f251/0x12f000, compress 0x0/0x0/0x0, omap 0xdd72, meta 0x2bc228e), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:32.533027+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001634 4 0.000052
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001727 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=54/55 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 99 handle_osd_map epochs [98,99], i have 99, src has [1,99]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 786432 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787516 data_alloc: 218103808 data_used: 5513
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=54/54 les/c/f=55/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.613437 5 0.000538
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000113 1 0.000077
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000263 1 0.000048
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.067032 2 0.000048
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 99 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:33.533178+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.328698 1 0.000078
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009769 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started/Primary 2.011522 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] exit Started 2.011539 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=98) [2]/[0] async=[2] r=0 lpr=98 pi=[54,98)/1 crt=43'551 mlcod 43'551 active+remapped mbc={255={}}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603092194s) [2] async=[2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 active pruub 186.894073486s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603034973s) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY pruub 186.894073486s@ mbc={}] exit Reset 0.000089 1 0.000127
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603034973s) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY pruub 186.894073486s@ mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603034973s) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY pruub 186.894073486s@ mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603034973s) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY pruub 186.894073486s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603034973s) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY pruub 186.894073486s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 100 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.603034973s) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY pruub 186.894073486s@ mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 778240 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 100 heartbeat osd_stat(store_statfs(0x4fcef3000/0x0/0x4ffc00000, data 0x92872/0x135000, compress 0x0/0x0/0x0, omap 0xe288, meta 0x2bc1d78), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:34.533272+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 770048 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/Stray 1.326133 6 0.000066
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000962 2 0.000107
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=-1 lpr=100 DELETING pi=[54,100)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067779 2 0.000142
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started/ToDelete 0.068806 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 101 pg[9.19( v 43'551 (0'0,43'551] lb MIN local-lis/les=98/99 n=6 ec=47/37 lis/c=98/54 les/c/f=99/55/0 sis=100) [2] r=-1 lpr=100 pi=[54,100)/1 crt=43'551 unknown NOTIFY mbc={}] exit Started 1.395006 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:35.533410+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 745472 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:36.533600+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 745472 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:37.533734+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 737280 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777421 data_alloc: 218103808 data_used: 5513
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.293404579s of 12.320687294s, submitted: 82
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:38.533866+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:08.074216+0000 osd.0 (osd.0) 92 : cluster [DBG] 5.2 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:08.084821+0000 osd.0 (osd.0) 93 : cluster [DBG] 5.2 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fcef5000/0x0/0x4ffc00000, data 0x94121/0x135000, compress 0x0/0x0/0x0, omap 0xe513, meta 0x2bc1aed), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 737280 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 93)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:08.074216+0000 osd.0 (osd.0) 92 : cluster [DBG] 5.2 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:08.084821+0000 osd.0 (osd.0) 93 : cluster [DBG] 5.2 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:39.533989+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:09.038707+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.2 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:09.049297+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.2 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 737280 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 101 handle_osd_map epochs [101,102], i have 102, src has [1,102]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 95)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:09.038707+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.2 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:09.049297+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.2 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:40.534161+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 720896 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:41.534254+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 720896 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:42.534354+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:12.084035+0000 osd.0 (osd.0) 96 : cluster [DBG] 5.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:12.094328+0000 osd.0 (osd.0) 97 : cluster [DBG] 5.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 720896 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 786708 data_alloc: 218103808 data_used: 5513
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 97)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:12.084035+0000 osd.0 (osd.0) 96 : cluster [DBG] 5.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:12.094328+0000 osd.0 (osd.0) 97 : cluster [DBG] 5.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 102 heartbeat osd_stat(store_statfs(0x4fcef4000/0x0/0x4ffc00000, data 0x95cbd/0x138000, compress 0x0/0x0/0x0, omap 0xe79e, meta 0x2bc1862), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:43.534483+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:13.035653+0000 osd.0 (osd.0) 98 : cluster [DBG] 2.1c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:13.046201+0000 osd.0 (osd.0) 99 : cluster [DBG] 2.1c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=0 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000120 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=0 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000048
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000097 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000120 1 0.000194
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 sudo[249045]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000204 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 704512 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.695102 2 0.000104
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.695358 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.695494 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=103) [0] r=0 lpr=103 pi=[76,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000076 1 0.000111
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 104 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 99)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:13.035653+0000 osd.0 (osd.0) 98 : cluster [DBG] 2.1c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:13.046201+0000 osd.0 (osd.0) 99 : cluster [DBG] 2.1c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:44.534641+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 696320 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:45.534755+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:14.961122+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:14.971686+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 696320 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.657290 5 0.000034
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=76/76 les/c/f=77/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 crt=43'551 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001888 4 0.000085
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000034 1 0.000043
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 lc 42'114 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.063842 1 0.000027
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 105 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 101)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:14.961122+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:14.971686+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.293019 1 0.000045
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.358866 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.016191 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[76,104)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000306 1 0.000362
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000110 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e(unlocked)] enter Initial
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=0 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=0 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000305 1 0.000030
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000345 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002517 2 0.000215
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000645 2 0.000066
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 106 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:46.535005+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 638976 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001658 2 0.000109
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.002487 2 0.000052
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.002885 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.002926 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004997 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=104/105 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=106) [0] r=0 lpr=106 pi=[64,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=106/107 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000554 1 0.000662
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000094 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=106/107 n=6 ec=47/37 lis/c=104/76 les/c/f=105/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 107 handle_osd_map epochs [106,107], i have 107, src has [1,107]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=106/107 n=6 ec=47/37 lis/c=106/76 les/c/f=107/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001656 3 0.000808
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=106/107 n=6 ec=47/37 lis/c=106/76 les/c/f=107/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=106/107 n=6 ec=47/37 lis/c=106/76 les/c/f=107/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000033 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 107 pg[9.1c( v 43'551 (0'0,43'551] local-lis/les=106/107 n=6 ec=47/37 lis/c=106/76 les/c/f=107/77/0 sis=106) [0] r=0 lpr=106 pi=[76,106)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:47.535106+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1679360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823177 data_alloc: 218103808 data_used: 5513
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:48.535208+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.806337357s of 10.828322411s, submitted: 38
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.571539 5 0.000226
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 lc 0'0 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=64/64 les/c/f=65/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 crt=43'551 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001976 4 0.000078
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000249 1 0.000060
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 lc 43'291 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042653 1 0.000027
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 108 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1646592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fcee0000/0x0/0x4ffc00000, data 0x9e673/0x14a000, compress 0x0/0x0/0x0, omap 0xf455, meta 0x2bc0bab), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 108 handle_osd_map epochs [108,109], i have 109, src has [1,109]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.394809 1 0.000032
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started/ReplicaActive 0.440061 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 active+remapped mbc={}] exit Started 2.011887 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[64,107)/1 pct=0'0 crt=43'551 active+remapped mbc={}] enter Reset
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 pct=0'0 crt=43'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000568
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Start
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001479 2 0.000262
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=0/0 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Dec 13 07:36:33 compute-0 ceph-osd[85140]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000290 2 0.000071
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 109 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:49.535316+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1613824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004879 2 0.000200
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007102 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=107/108 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=109/110 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=109/110 n=6 ec=47/37 lis/c=107/64 les/c/f=108/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=109/110 n=6 ec=47/37 lis/c=109/64 les/c/f=110/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000891 3 0.000317
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=109/110 n=6 ec=47/37 lis/c=109/64 les/c/f=110/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=109/110 n=6 ec=47/37 lis/c=109/64 les/c/f=110/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 pg_epoch: 110 pg[9.1e( v 43'551 (0'0,43'551] local-lis/les=109/110 n=6 ec=47/37 lis/c=109/64 les/c/f=110/65/0 sis=109) [0] r=0 lpr=109 pi=[64,109)/1 crt=43'551 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:50.535467+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1572864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 110 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0xa3626/0x154000, compress 0x0/0x0/0x0, omap 0xfbf6, meta 0x2bc040a), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:51.535575+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 110 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0xa3626/0x154000, compress 0x0/0x0/0x0, omap 0xfbf6, meta 0x2bc040a), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:52.535696+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840973 data_alloc: 218103808 data_used: 5513
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:53.535802+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:22.964997+0000 osd.0 (osd.0) 102 : cluster [DBG] 5.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:22.975498+0000 osd.0 (osd.0) 103 : cluster [DBG] 5.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1548288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 103)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:22.964997+0000 osd.0 (osd.0) 102 : cluster [DBG] 5.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:22.975498+0000 osd.0 (osd.0) 103 : cluster [DBG] 5.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:54.535948+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:23.997224+0000 osd.0 (osd.0) 104 : cluster [DBG] 7.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:24.007772+0000 osd.0 (osd.0) 105 : cluster [DBG] 7.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1548288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 105)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:23.997224+0000 osd.0 (osd.0) 104 : cluster [DBG] 7.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:24.007772+0000 osd.0 (osd.0) 105 : cluster [DBG] 7.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fced8000/0x0/0x4ffc00000, data 0xa3626/0x154000, compress 0x0/0x0/0x0, omap 0xfbf6, meta 0x2bc040a), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 111 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:55.536097+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:56.536213+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1531904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:57.536314+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:27.066541+0000 osd.0 (osd.0) 106 : cluster [DBG] 3.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:27.077201+0000 osd.0 (osd.0) 107 : cluster [DBG] 3.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 1531904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859894 data_alloc: 218103808 data_used: 5765
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 107)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:27.066541+0000 osd.0 (osd.0) 106 : cluster [DBG] 3.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:27.077201+0000 osd.0 (osd.0) 107 : cluster [DBG] 3.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:58.536488+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:28.092948+0000 osd.0 (osd.0) 108 : cluster [DBG] 5.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:28.103544+0000 osd.0 (osd.0) 109 : cluster [DBG] 5.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1523712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.342509270s of 10.362545013s, submitted: 34
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 109)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:28.092948+0000 osd.0 (osd.0) 108 : cluster [DBG] 5.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:28.103544+0000 osd.0 (osd.0) 109 : cluster [DBG] 5.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:16:59.536649+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:00.536759+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1507328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:01.536865+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:31.139853+0000 osd.0 (osd.0) 110 : cluster [DBG] 3.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:31.150401+0000 osd.0 (osd.0) 111 : cluster [DBG] 3.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 1499136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 111)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:31.139853+0000 osd.0 (osd.0) 110 : cluster [DBG] 3.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:31.150401+0000 osd.0 (osd.0) 111 : cluster [DBG] 3.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:02.537025+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:32.128869+0000 osd.0 (osd.0) 112 : cluster [DBG] 3.c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:32.139279+0000 osd.0 (osd.0) 113 : cluster [DBG] 3.c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1490944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868973 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 113)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:32.128869+0000 osd.0 (osd.0) 112 : cluster [DBG] 3.c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:32.139279+0000 osd.0 (osd.0) 113 : cluster [DBG] 3.c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:03.537195+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:33.158418+0000 osd.0 (osd.0) 114 : cluster [DBG] 7.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:33.168955+0000 osd.0 (osd.0) 115 : cluster [DBG] 7.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 115)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:33.158418+0000 osd.0 (osd.0) 114 : cluster [DBG] 7.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:33.168955+0000 osd.0 (osd.0) 115 : cluster [DBG] 7.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:04.537345+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:05.537489+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:35.182806+0000 osd.0 (osd.0) 116 : cluster [DBG] 3.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:35.193361+0000 osd.0 (osd.0) 117 : cluster [DBG] 3.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1449984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 117)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:35.182806+0000 osd.0 (osd.0) 116 : cluster [DBG] 3.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:35.193361+0000 osd.0 (osd.0) 117 : cluster [DBG] 3.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:06.537641+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:36.165921+0000 osd.0 (osd.0) 118 : cluster [DBG] 3.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:36.176519+0000 osd.0 (osd.0) 119 : cluster [DBG] 3.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1433600 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 119)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:36.165921+0000 osd.0 (osd.0) 118 : cluster [DBG] 3.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:36.176519+0000 osd.0 (osd.0) 119 : cluster [DBG] 3.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:07.537799+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1425408 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876208 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:08.537912+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1417216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _renew_subs
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:09.538026+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:39.245679+0000 osd.0 (osd.0) 120 : cluster [DBG] 7.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:39.256278+0000 osd.0 (osd.0) 121 : cluster [DBG] 7.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1417216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.930002213s of 10.942367554s, submitted: 13
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 121)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:39.245679+0000 osd.0 (osd.0) 120 : cluster [DBG] 7.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:39.256278+0000 osd.0 (osd.0) 121 : cluster [DBG] 7.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:10.538175+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:40.207495+0000 osd.0 (osd.0) 122 : cluster [DBG] 2.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:40.218042+0000 osd.0 (osd.0) 123 : cluster [DBG] 2.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1392640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 123)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:40.207495+0000 osd.0 (osd.0) 122 : cluster [DBG] 2.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:40.218042+0000 osd.0 (osd.0) 123 : cluster [DBG] 2.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:11.538315+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1392640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:12.538484+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 1392640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881034 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:13.538617+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1384448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:14.538733+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1384448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:15.538834+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1368064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:16.538979+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1368064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:17.539092+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1351680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881034 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:18.539213+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1351680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:19.539323+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:49.187163+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:49.197731+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1343488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.006848335s of 10.009189606s, submitted: 4
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 125)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:49.187163+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:49.197731+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:20.539477+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:50.216662+0000 osd.0 (osd.0) 126 : cluster [DBG] 3.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:50.227264+0000 osd.0 (osd.0) 127 : cluster [DBG] 3.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1327104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 127)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:50.216662+0000 osd.0 (osd.0) 126 : cluster [DBG] 3.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:50.227264+0000 osd.0 (osd.0) 127 : cluster [DBG] 3.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:21.539613+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1327104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:22.539718+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1327104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885858 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:23.539849+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1302528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:24.539954+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1302528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:25.540059+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:26.540218+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:27.540323+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:57.286235+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.1d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:57.296940+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.1d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1277952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 888271 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 129)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:57.286235+0000 osd.0 (osd.0) 128 : cluster [DBG] 8.1d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:57.296940+0000 osd.0 (osd.0) 129 : cluster [DBG] 8.1d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:28.540458+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1277952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:29.540555+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:59.286196+0000 osd.0 (osd.0) 130 : cluster [DBG] 8.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:17:59.296798+0000 osd.0 (osd.0) 131 : cluster [DBG] 8.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1269760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 131)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:59.286196+0000 osd.0 (osd.0) 130 : cluster [DBG] 8.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:17:59.296798+0000 osd.0 (osd.0) 131 : cluster [DBG] 8.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:30.540679+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1269760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.043703079s of 11.047969818s, submitted: 6
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:31.540785+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:01.264590+0000 osd.0 (osd.0) 132 : cluster [DBG] 11.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:01.275171+0000 osd.0 (osd.0) 133 : cluster [DBG] 11.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 133)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:01.264590+0000 osd.0 (osd.0) 132 : cluster [DBG] 11.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:01.275171+0000 osd.0 (osd.0) 133 : cluster [DBG] 11.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:32.540920+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 893099 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:33.541018+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1253376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:34.541120+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:04.242747+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:04.252898+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1228800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:35.541300+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 135)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:04.242747+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.1f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:04.252898+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.1f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1228800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:36.541458+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:06.287841+0000 osd.0 (osd.0) 136 : cluster [DBG] 11.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:06.298454+0000 osd.0 (osd.0) 137 : cluster [DBG] 11.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 137)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:06.287841+0000 osd.0 (osd.0) 136 : cluster [DBG] 11.19 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:06.298454+0000 osd.0 (osd.0) 137 : cluster [DBG] 11.19 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1212416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:37.541626+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1212416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897927 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:38.541763+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:08.298274+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.1a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:08.308971+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.1a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 139)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:08.298274+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.1a scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:08.308971+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.1a scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1196032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:39.541933+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:09.342578+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:09.353174+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 141)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:09.342578+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.18 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:09.353174+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.18 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1196032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:40.542118+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:10.371047+0000 osd.0 (osd.0) 142 : cluster [DBG] 10.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:10.381061+0000 osd.0 (osd.0) 143 : cluster [DBG] 10.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 143)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:10.371047+0000 osd.0 (osd.0) 142 : cluster [DBG] 10.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:10.381061+0000 osd.0 (osd.0) 143 : cluster [DBG] 10.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:41.542279+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:42.542373+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905168 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:43.542465+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:44.542576+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1171456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:45.542670+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:46.542780+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:47.542873+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905168 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:48.543003+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1155072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.189657211s of 18.197418213s, submitted: 12
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:49.543110+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:19.462059+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:19.472641+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 145)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:19.462059+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:19.472641+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1138688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:50.543288+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:20.472728+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:20.483311+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 147)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:20.472728+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:20.483311+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1138688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:51.543468+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1138688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:52.543579+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1130496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909994 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:53.543675+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1122304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:54.543757+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:24.513600+0000 osd.0 (osd.0) 148 : cluster [DBG] 8.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:24.524192+0000 osd.0 (osd.0) 149 : cluster [DBG] 8.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 149)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:24.513600+0000 osd.0 (osd.0) 148 : cluster [DBG] 8.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:24.524192+0000 osd.0 (osd.0) 149 : cluster [DBG] 8.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1122304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:55.543852+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 1 last_log 150 sent 149 num 1 unsent 1 sending 1
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:25.536746+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 150)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:25.536746+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1114112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:56.543968+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 1 last_log 151 sent 150 num 1 unsent 1 sending 1
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:25.547231+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 151)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:25.547231+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1114112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:57.544099+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917229 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:58.544206+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:27.595485+0000 osd.0 (osd.0) 152 : cluster [DBG] 11.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:27.606061+0000 osd.0 (osd.0) 153 : cluster [DBG] 11.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 153)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:27.595485+0000 osd.0 (osd.0) 152 : cluster [DBG] 11.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:27.606061+0000 osd.0 (osd.0) 153 : cluster [DBG] 11.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1122304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:17:59.544334+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1122304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:00.544433+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1114112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:01.544552+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1114112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:02.544663+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917229 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:03.544780+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:04.544919+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.093047142s of 15.100746155s, submitted: 10
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:05.545018+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:34.562849+0000 osd.0 (osd.0) 154 : cluster [DBG] 10.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:34.573416+0000 osd.0 (osd.0) 155 : cluster [DBG] 10.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 155)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:34.562849+0000 osd.0 (osd.0) 154 : cluster [DBG] 10.7 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:34.573416+0000 osd.0 (osd.0) 155 : cluster [DBG] 10.7 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1097728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:06.545151+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1097728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:07.545251+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:36.557843+0000 osd.0 (osd.0) 156 : cluster [DBG] 8.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:36.568573+0000 osd.0 (osd.0) 157 : cluster [DBG] 8.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 157)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:36.557843+0000 osd.0 (osd.0) 156 : cluster [DBG] 8.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:36.568573+0000 osd.0 (osd.0) 157 : cluster [DBG] 8.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1081344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922053 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:08.545369+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:38.527946+0000 osd.0 (osd.0) 158 : cluster [DBG] 11.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:38.538526+0000 osd.0 (osd.0) 159 : cluster [DBG] 11.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 159)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:38.527946+0000 osd.0 (osd.0) 158 : cluster [DBG] 11.14 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:38.538526+0000 osd.0 (osd.0) 159 : cluster [DBG] 11.14 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1064960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:09.545476+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1056768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:10.545571+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:40.529352+0000 osd.0 (osd.0) 160 : cluster [DBG] 10.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:40.539888+0000 osd.0 (osd.0) 161 : cluster [DBG] 10.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 161)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:40.529352+0000 osd.0 (osd.0) 160 : cluster [DBG] 10.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:40.539888+0000 osd.0 (osd.0) 161 : cluster [DBG] 10.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:11.545711+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:12.545822+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:41.552773+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.8 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:41.563473+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.8 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 163)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:41.552773+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.8 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:41.563473+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.8 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1040384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929294 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:13.545967+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1040384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:14.546066+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1032192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:15.546156+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1024000 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:16.546310+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.032814980s of 12.039576530s, submitted: 10
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1007616 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:17.546417+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:46.602465+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:46.613016+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 991232 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934122 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 165)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:46.602465+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.4 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:46.613016+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.4 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:18.546556+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:47.638555+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:47.649010+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 974848 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 167)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:47.638555+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.17 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:47.649010+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.17 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:19.546677+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 966656 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:20.546779+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:21.546882+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:50.675429+0000 osd.0 (osd.0) 168 : cluster [DBG] 10.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:50.686142+0000 osd.0 (osd.0) 169 : cluster [DBG] 10.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 925696 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 169)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:50.675429+0000 osd.0 (osd.0) 168 : cluster [DBG] 10.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:50.686142+0000 osd.0 (osd.0) 169 : cluster [DBG] 10.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:22.547010+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:51.643061+0000 osd.0 (osd.0) 170 : cluster [DBG] 10.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:51.653653+0000 osd.0 (osd.0) 171 : cluster [DBG] 10.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 909312 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941363 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 171)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:51.643061+0000 osd.0 (osd.0) 170 : cluster [DBG] 10.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:51.653653+0000 osd.0 (osd.0) 171 : cluster [DBG] 10.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:23.547158+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:52.594708+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.10 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:52.605309+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.10 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 901120 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 173)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:52.594708+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.10 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:52.605309+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.10 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:24.547300+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 884736 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:25.547405+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:54.647503+0000 osd.0 (osd.0) 174 : cluster [DBG] 11.10 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:54.658051+0000 osd.0 (osd.0) 175 : cluster [DBG] 11.10 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 884736 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 175)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:54.647503+0000 osd.0 (osd.0) 174 : cluster [DBG] 11.10 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:54.658051+0000 osd.0 (osd.0) 175 : cluster [DBG] 11.10 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:26.547548+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:55.605019+0000 osd.0 (osd.0) 176 : cluster [DBG] 11.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:18:55.615590+0000 osd.0 (osd.0) 177 : cluster [DBG] 11.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 876544 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 177)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:55.605019+0000 osd.0 (osd.0) 176 : cluster [DBG] 11.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:18:55.615590+0000 osd.0 (osd.0) 177 : cluster [DBG] 11.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:27.547686+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 876544 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946191 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:28.547789+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 868352 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:29.547889+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:30.548020+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:31.548125+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.055186272s of 15.064337730s, submitted: 14
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 851968 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:36:33 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:32.548238+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:01.666489+0000 osd.0 (osd.0) 178 : cluster [DBG] 10.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:01.680588+0000 osd.0 (osd.0) 179 : cluster [DBG] 10.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 851968 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951017 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 179)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:01.666489+0000 osd.0 (osd.0) 178 : cluster [DBG] 10.e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:01.680588+0000 osd.0 (osd.0) 179 : cluster [DBG] 10.e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:33.548362+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:02.629782+0000 osd.0 (osd.0) 180 : cluster [DBG] 10.d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:02.643904+0000 osd.0 (osd.0) 181 : cluster [DBG] 10.d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 843776 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 181)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:02.629782+0000 osd.0 (osd.0) 180 : cluster [DBG] 10.d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:02.643904+0000 osd.0 (osd.0) 181 : cluster [DBG] 10.d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:34.548505+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 843776 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:35.548619+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 843776 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:36.548737+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 835584 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:37.548853+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953428 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:38.548962+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:07.567125+0000 osd.0 (osd.0) 182 : cluster [DBG] 8.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:07.584768+0000 osd.0 (osd.0) 183 : cluster [DBG] 8.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 183)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:07.567125+0000 osd.0 (osd.0) 182 : cluster [DBG] 8.f scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:07.584768+0000 osd.0 (osd.0) 183 : cluster [DBG] 8.f scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:39.549141+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:40.549239+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:41.549388+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 786432 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:42.549488+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 786432 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953428 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:43.549628+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:44.549740+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.889550209s of 12.894322395s, submitted: 6
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 786432 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:45.549832+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:14.561180+0000 osd.0 (osd.0) 184 : cluster [DBG] 8.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:14.575303+0000 osd.0 (osd.0) 185 : cluster [DBG] 8.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 786432 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 185)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:14.561180+0000 osd.0 (osd.0) 184 : cluster [DBG] 8.6 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:14.575303+0000 osd.0 (osd.0) 185 : cluster [DBG] 8.6 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:46.549975+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:15.611061+0000 osd.0 (osd.0) 186 : cluster [DBG] 10.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:15.624863+0000 osd.0 (osd.0) 187 : cluster [DBG] 10.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 187)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:15.611061+0000 osd.0 (osd.0) 186 : cluster [DBG] 10.15 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:15.624863+0000 osd.0 (osd.0) 187 : cluster [DBG] 10.15 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:47.550118+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:16.628130+0000 osd.0 (osd.0) 188 : cluster [DBG] 8.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:16.642276+0000 osd.0 (osd.0) 189 : cluster [DBG] 8.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963078 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 189)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:16.628130+0000 osd.0 (osd.0) 188 : cluster [DBG] 8.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:16.642276+0000 osd.0 (osd.0) 189 : cluster [DBG] 8.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:48.550239+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:17.654366+0000 osd.0 (osd.0) 190 : cluster [DBG] 10.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:17.668484+0000 osd.0 (osd.0) 191 : cluster [DBG] 10.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 191)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:17.654366+0000 osd.0 (osd.0) 190 : cluster [DBG] 10.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:17.668484+0000 osd.0 (osd.0) 191 : cluster [DBG] 10.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:49.550361+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:50.550469+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:51.550565+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:20.631995+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:20.663773+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 193)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:20.631995+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.1e scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:20.663773+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.1e scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:52.550691+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965491 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:53.550865+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:54.550974+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:55.551116+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.031267166s of 11.038361549s, submitted: 10
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:56.551277+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:25.599505+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.1c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:25.641831+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.1c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 195)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:25.599505+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.1c scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:25.641831+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.1c scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:57.551459+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967904 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:58.551579+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:18:59.551713+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:00.551852+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:01.551952+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:30.591018+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:30.612218+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 679936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 197)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:30.591018+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.1b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:30.612218+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.1b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:02.552083+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:31.573494+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:31.601784+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 679936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975141 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 199)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:31.573494+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:31.601784+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:03.552267+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:32.595583+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:32.634423+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 647168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 201)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:32.595583+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.3 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:32.634423+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.3 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:04.552484+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 647168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:05.552616+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.005068779s of 10.010265350s, submitted: 8
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:06.552775+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:35.609809+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:35.648701+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 203)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:35.609809+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.d scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:35.648701+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.d scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:07.552929+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 630784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977552 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:08.553064+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:09.553200+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 630784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:10.553293+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:39.688589+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:39.730866+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 205)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:39.688589+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.1 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:39.730866+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.1 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:11.553426+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:12.553581+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979963 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:13.553691+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:14.553795+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:15.553894+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:16.554013+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:17.554111+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979963 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:18.554207+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:19.554301+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.003181458s of 14.005554199s, submitted: 4
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:20.554459+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:49.615381+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:49.643573+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 207)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:49.615381+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.9 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:49.643573+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.9 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:21.554592+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:50.613393+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:19:50.638096+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 209)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:50.613393+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.16 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:19:50.638096+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.16 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:22.554709+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984787 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:23.554801+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:24.554901+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 548864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:25.554995+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 548864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:26.555149+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 532480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:27.555251+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 524288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984787 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:28.555364+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:29.555499+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 491520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:30.555598+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 491520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:31.555738+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 491520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:32.555839+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 483328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984787 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:33.555945+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 475136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:34.556082+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 475136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 07:36:33 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:35.556203+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:36.556329+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:37.556444+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984787 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:38.556567+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:39.556717+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:40.556820+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:41.556923+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:42.557060+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984787 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:43.557171+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.890445709s of 24.893344879s, submitted: 4
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:44.557290+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:20:14.508700+0000 osd.0 (osd.0) 210 : cluster [DBG] 9.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:20:14.532927+0000 osd.0 (osd.0) 211 : cluster [DBG] 9.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 211)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:20:14.508700+0000 osd.0 (osd.0) 210 : cluster [DBG] 9.b scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:20:14.532927+0000 osd.0 (osd.0) 211 : cluster [DBG] 9.b scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 450560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:45.557472+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 450560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:46.557637+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 1 last_log 212 sent 211 num 1 unsent 1 sending 1
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:20:16.528868+0000 osd.0 (osd.0) 212 : cluster [DBG] 9.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 212)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:20:16.528868+0000 osd.0 (osd.0) 212 : cluster [DBG] 9.5 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:47.557772+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 1 last_log 213 sent 212 num 1 unsent 1 sending 1
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:20:16.571225+0000 osd.0 (osd.0) 213 : cluster [DBG] 9.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 213)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:20:16.571225+0000 osd.0 (osd.0) 213 : cluster [DBG] 9.5 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989609 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:48.557911+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 442368 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:49.558012+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:50.558117+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:51.558218+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:52.558318+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:20:21.609324+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.11 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  will send 2025-12-13T07:20:21.644556+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.11 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client handle_log_ack log(last 215)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:20:21.609324+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.11 scrub starts
Dec 13 07:36:33 compute-0 ceph-osd[85140]: log_client  logged 2025-12-13T07:20:21.644556+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.11 scrub ok
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:53.558629+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:54.558719+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:55.558806+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:56.558916+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 409600 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:57.559016+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 409600 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:58.559128+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 401408 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:19:59.559246+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:00.559347+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:01.559476+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:02.559582+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:03.559724+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 368640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:04.559851+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:05.559974+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:06.560119+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:07.560272+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:08.560395+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:09.560523+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:10.560635+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:11.560739+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:12.560848+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 335872 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:13.560936+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 335872 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:14.561026+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:15.561138+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:16.561324+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 319488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:17.561488+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 319488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:18.561629+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 311296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:19.561777+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:20.561918+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:21.562075+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:22.562194+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:23.562312+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 286720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:24.562419+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:25.562590+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:26.562737+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:27.562880+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:28.563012+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:29.563177+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:30.563265+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:31.563394+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 253952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:32.563530+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 253952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:33.563683+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:34.563806+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:35.563920+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:36.564075+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:37.564208+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:38.564361+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:39.564504+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 253952 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:40.564642+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:41.564750+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 245760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:42.564875+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:43.564980+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:44.565081+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:45.565178+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 229376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:46.565299+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 229376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:47.565424+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 221184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:48.565569+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:49.565688+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:50.565804+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:51.565945+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:52.566079+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:53.566208+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:54.566349+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 180224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:55.566454+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 180224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:56.566565+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:57.566659+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:58.566770+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:20:59.566873+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:00.566977+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:01.567092+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:02.567194+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:03.567291+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:04.567394+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:05.567470+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:06.567587+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:07.567714+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:08.567815+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:09.567924+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:10.568038+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 114688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:11.568143+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 114688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:12.568259+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 114688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:13.568375+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:14.568487+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:15.568590+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:16.568748+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:17.568856+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:18.568977+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:19.569082+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 90112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:20.569233+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 90112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:21.569388+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:22.569506+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:23.569606+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 73728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:24.569727+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:25.569836+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:26.569994+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 73728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:27.570133+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 65536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:28.570273+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 57344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:29.570410+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 57344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:30.570470+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 57344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:31.570575+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 49152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:32.570688+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 49152 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:33.571305+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 40960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:34.571473+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 40960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:35.571587+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 40960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:36.571734+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 32768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:37.571874+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 32768 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:38.572000+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:39.572120+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:40.572226+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:41.572333+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:42.572461+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:43.572563+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:44.572680+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:45.572788+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:46.572907+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:47.573028+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:48.573135+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:49.573242+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:50.573348+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:51.573478+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:52.573586+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:53.573689+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:54.573794+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:55.573894+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:56.574018+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:57.574122+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:58.574246+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:21:59.574660+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:00.574792+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:01.574888+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:02.575019+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:03.575137+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:04.575294+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:05.575422+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:06.575752+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:07.575917+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:08.576077+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:09.576209+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:10.576358+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:11.576486+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:12.576611+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:13.576681+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:14.576812+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:15.576934+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:16.577078+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:17.577202+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:18.577538+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:19.577652+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:20.577769+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:21.577883+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:22.578011+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 925696 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:23.578160+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 925696 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:24.578287+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 917504 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:25.578469+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 917504 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:26.578645+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 917504 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:27.578804+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 909312 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:28.578933+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 909312 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:29.579057+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 901120 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:30.579176+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 901120 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:31.579307+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 901120 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:32.579471+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 892928 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:33.579659+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 892928 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:34.579766+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 884736 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:35.580028+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 884736 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:36.580176+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 884736 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 07:36:33 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:37.580312+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 876544 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:38.580422+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 876544 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:39.580558+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 868352 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:40.580719+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 868352 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:41.580843+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 868352 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:42.580948+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 851968 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:43.581064+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 843776 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:44.581169+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 835584 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:45.581267+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 835584 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:46.581390+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 835584 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:47.581484+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 819200 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:48.581592+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 819200 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:49.581696+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 819200 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:50.581799+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 811008 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:51.581911+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 811008 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:52.582011+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 811008 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:53.582113+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 802816 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:54.582215+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 802816 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:55.582310+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 794624 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:56.582425+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 794624 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:57.582534+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 786432 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:58.582639+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 786432 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:22:59.582737+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 786432 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:00.582841+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 778240 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:01.582951+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 778240 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:02.583069+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:03.583162+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:04.583260+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:05.583364+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 761856 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:06.583479+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 761856 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:07.583590+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 753664 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:08.583705+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 753664 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:09.584243+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 753664 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:10.584373+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 745472 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:11.584546+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 745472 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:12.584655+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 729088 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:13.584798+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 729088 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:14.584941+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 729088 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:15.585086+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:16.585206+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:17.585322+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:18.585443+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:19.585545+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:20.585640+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 704512 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:21.585744+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 704512 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:22.585841+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:23.585942+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:24.586056+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 712704 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:25.586219+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 704512 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:26.586371+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 704512 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:27.586493+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 696320 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:28.586603+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 696320 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:29.586711+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 696320 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:30.586820+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 696320 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:31.586922+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 688128 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:32.587020+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 688128 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:33.587138+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 679936 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:34.587248+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 679936 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:35.587349+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 679936 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:36.587469+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 671744 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:37.587599+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 671744 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:38.587705+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 663552 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:39.587823+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 663552 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:40.587924+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 655360 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:41.588024+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 647168 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5491 writes, 855 syncs, 6.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s
                                           Interval WAL: 5491 writes, 855 syncs, 6.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:42.588153+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 581632 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:43.588245+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 565248 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:44.588374+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 565248 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:45.588477+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 565248 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:46.588584+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 557056 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:47.588687+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 557056 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:48.588802+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 557056 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:49.588943+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 548864 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:50.589079+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 548864 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:51.589212+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 540672 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:52.589335+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 540672 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:53.589477+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 540672 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:54.589609+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 532480 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:55.589742+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 532480 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:56.589853+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 524288 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:57.589949+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 524288 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:58.590003+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 524288 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:23:59.590110+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 516096 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:00.590207+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 516096 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:01.590343+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 507904 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:02.590533+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 499712 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:03.590700+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 483328 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 07:36:33 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 07:36:33 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:04.590853+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 483328 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:05.591000+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 483328 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:06.591165+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 475136 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:07.591303+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 475136 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:08.591430+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 458752 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:09.591564+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 458752 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:10.591653+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 450560 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:11.591762+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 450560 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:12.591861+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 450560 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:13.591962+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 442368 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:14.592089+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 442368 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:15.592267+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 434176 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:16.592477+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 434176 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:17.592577+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 434176 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:18.592680+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 417792 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:19.592821+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 417792 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:20.592979+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 417792 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:21.593113+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 409600 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:22.593243+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 409600 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:23.593577+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 401408 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:24.593726+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 401408 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:25.593871+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 401408 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:26.593990+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 393216 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:27.594096+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 385024 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:28.594232+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 385024 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:29.594338+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 376832 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:30.594450+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 376832 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:31.594588+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 368640 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:32.594717+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 368640 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:33.594822+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 368640 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:34.594919+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 360448 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:35.595017+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 291.406311035s of 291.411071777s, submitted: 6
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 270336 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:36.595170+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:37.595260+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:38.595355+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:39.595464+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:40.595563+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:41.595656+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:42.595759+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:43.595865+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:44.595964+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:45.596088+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:46.596198+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:47.596304+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:48.596411+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:49.596479+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:50.596579+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:51.596681+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:52.596776+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:53.596868+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:54.596965+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:55.597072+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:56.597195+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:57.597309+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:58.597416+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:24:59.597560+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:00.597691+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:01.597804+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:02.597931+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:03.598035+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:04.598180+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:05.598329+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:06.598501+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:07.598597+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:08.598703+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:09.598837+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:10.598933+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:11.599060+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 532480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:12.599150+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 532480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:13.599252+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:14.599357+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:15.599462+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 516096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:16.599574+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 507904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:17.599668+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 507904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:18.599801+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 499712 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:19.599938+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:20.600033+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 491520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:21.600138+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:22.600242+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 483328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:23.600366+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 475136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:24.600465+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:25.600567+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:26.600717+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:27.600816+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:28.600909+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:29.601021+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:30.601122+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:31.601232+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 442368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:32.601333+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 442368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:33.601459+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 442368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:34.601567+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:35.601734+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:36.601880+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:37.601994+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 409600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:38.602112+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:39.602225+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 393216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:40.602340+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 393216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:41.602483+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:42.602576+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:43.602682+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:44.602777+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:45.602881+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:46.603029+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:47.603125+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:48.603218+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:49.603309+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:50.603414+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:51.603555+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:52.603653+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:53.603766+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:54.603869+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:55.603975+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:56.604113+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:57.604218+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:58.604328+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:25:59.604449+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:00.604565+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:01.604678+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:02.604790+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:03.604902+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:04.605020+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:05.605137+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:06.605266+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:07.605368+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:08.605482+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:09.605593+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:10.605704+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:11.605821+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:12.605921+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:13.606025+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:14.606125+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:15.606238+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:16.606400+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:17.606560+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:18.606676+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:19.606776+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:20.606876+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:21.606978+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:22.607075+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:23.607181+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:24.607278+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:25.607379+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:26.607496+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:27.607599+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:28.607691+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:29.607786+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:30.607904+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:31.608002+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:32.608106+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:33.608214+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:34.608315+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:35.608427+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:36.608570+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:37.608690+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:38.608823+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:39.608981+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:40.609108+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:41.609252+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:42.609367+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:43.609503+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:44.609631+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:45.609727+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:46.609836+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:47.609949+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:48.610048+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:49.610148+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:50.610243+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:51.610334+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:52.610428+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:53.610536+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:54.610647+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:55.610740+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:56.610853+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:57.610971+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:58.611081+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:26:59.611193+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:00.611293+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:01.611395+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:02.611500+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:03.611595+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:04.611688+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:05.611790+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:06.611907+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:07.612006+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:08.612107+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:09.612203+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:10.612299+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:11.612406+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:12.612511+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:13.612612+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:14.612722+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:15.612829+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:16.612948+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:17.613045+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:18.613152+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:19.613258+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:20.613370+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:21.613517+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:22.613625+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:23.613722+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:24.613825+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:25.613971+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:26.614090+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:27.614210+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:28.614300+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:29.614392+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:30.614480+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:31.614569+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:32.614666+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:33.614774+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:34.614885+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:35.614992+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:36.615117+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 253952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:37.615241+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:38.615378+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:39.615538+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:40.615644+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:41.615739+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:42.615885+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:43.615984+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:44.616083+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:45.616212+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:46.616325+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:47.616422+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 245760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:48.616471+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:49.616574+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:50.616664+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:51.616752+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:52.616845+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:53.616947+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:54.617032+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:55.617122+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 237568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:56.617233+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:57.617347+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:58.617456+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:27:59.617554+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:00.617658+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:01.617773+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:02.617877+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:03.617994+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:04.618095+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:05.618188+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:06.618336+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:07.618455+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 221184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:08.618554+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:09.618669+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:10.618775+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:11.618904+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:12.619001+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:13.619103+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:14.619213+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:15.619305+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 204800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:16.619454+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:17.619562+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:18.619660+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:19.619775+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:20.619883+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:21.619989+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:22.620084+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:23.620190+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:24.620297+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:25.620393+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:26.620465+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:27.620558+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:28.620654+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:29.620764+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:30.620857+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:31.620950+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:32.621059+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:33.621165+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:34.621256+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:35.621344+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:36.621476+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:37.621604+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:38.621710+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:39.621835+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:40.621943+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:41.622043+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:42.622157+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:43.622259+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:44.622368+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:45.622475+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:46.622621+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 180224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: mgrc ms_handle_reset ms_handle_reset con 0x557f622d9c00
Dec 13 07:36:33 compute-0 ceph-osd[85140]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4292604849
Dec 13 07:36:33 compute-0 ceph-osd[85140]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4292604849,v1:192.168.122.100:6801/4292604849]
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: get_auth_request con 0x557f63d70400 auth_method 0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: mgrc handle_mgr_configure stats_period=5
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:47.622718+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:48.622863+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:49.622989+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:50.623084+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:51.623211+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:52.623305+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 ms_handle_reset con 0x557f63f85400 session 0x557f64692e00
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: handle_auth_request added challenge on 0x557f65bc7400
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:53.623430+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:54.623569+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:55.623721+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:56.623918+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:57.624055+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:58.624218+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:28:59.624361+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:00.624466+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:01.624603+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:02.624712+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:03.624852+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:04.624987+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:05.625123+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:06.625277+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:07.625410+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:08.625550+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:09.625691+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:10.625794+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:11.625928+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:12.626059+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:13.626196+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:14.626332+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:15.626483+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:16.626655+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:17.626776+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:18.626903+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:19.627022+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:20.627152+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:21.627286+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:22.627421+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:23.627561+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:24.627707+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:25.627835+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:26.627986+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:27.628119+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:28.628241+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 ms_handle_reset con 0x557f63ea5000 session 0x557f64afb6c0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: handle_auth_request added challenge on 0x557f665f3400
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:29.628368+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1163264 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:30.628504+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1163264 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:31.628619+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1163264 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:32.628723+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1163264 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:33.628840+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1163264 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:34.628947+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1163264 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992022 data_alloc: 218103808 data_used: 6627
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:35.629054+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1163264 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.044281006s of 300.082519531s, submitted: 106
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: handle_auth_request added challenge on 0x557f63942000
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:36.629193+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:37.629300+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:38.629424+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:39.629568+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:40.629673+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:41.629766+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:42.629872+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:43.629994+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:44.630123+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:45.630294+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:46.630431+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:47.630596+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:48.630734+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:49.630870+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:50.631036+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:51.631214+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:52.631402+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:53.631582+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:54.631768+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:55.631987+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:56.632176+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:57.632386+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:58.632630+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:29:59.632806+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:00.632963+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:01.633100+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:02.633220+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:03.633361+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:04.633524+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:05.633690+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:06.633851+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:07.633985+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:08.634128+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:09.634270+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:10.634405+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:11.634538+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:12.634674+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:13.634814+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:14.634953+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:15.635094+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:16.635233+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:17.635354+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:18.635494+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:19.635645+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:20.635785+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:21.635923+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:22.636060+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:23.636193+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:24.636318+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:25.636473+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:26.636619+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:27.636729+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:28.636867+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:29.637006+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:30.637145+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:31.637288+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:32.637422+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:33.637582+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:34.637718+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:35.637854+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:36.637991+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:37.638146+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:38.638277+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:39.638416+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:40.638547+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:41.638697+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:42.638818+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:43.638937+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:44.639048+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:45.639143+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:46.639292+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:47.639430+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:48.640543+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:49.640655+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:50.640793+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:51.640892+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:52.641053+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:53.641192+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:54.641322+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:55.641425+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:56.641588+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:57.641749+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:58.641868+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:30:59.641993+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:00.642124+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:01.642261+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:02.642357+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:03.642490+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:04.642652+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:05.642761+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:06.642886+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:07.643024+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:08.643160+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:09.643270+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:10.643378+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:11.643500+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:12.643619+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:13.643732+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:14.643852+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:15.643956+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:16.644089+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:17.644188+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:18.644315+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:19.644414+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:20.644513+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:21.644606+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:22.644707+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:23.644800+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:24.644914+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:25.645523+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:26.645658+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:27.645814+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:28.645981+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:29.646076+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:30.646219+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:31.646351+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:32.646496+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:33.646671+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:34.646783+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:35.646927+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:36.647114+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:37.647242+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:38.647361+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:39.647527+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:40.647671+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:41.647804+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:42.647908+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:43.648040+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:44.648180+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:45.648325+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:46.648470+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:47.648568+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:48.648674+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:49.648773+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:50.648880+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:51.648990+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:52.649121+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:53.649219+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:54.649361+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:55.649492+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:56.649618+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:57.649716+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:58.649837+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:31:59.649966+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:00.650055+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:01.650206+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:02.650328+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:03.650489+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:04.650621+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:05.650746+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:06.650912+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:07.651058+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:08.651183+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:09.651319+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:10.651471+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:11.651581+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:12.651708+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:13.651817+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:14.651952+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:15.652059+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:16.652167+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:17.652274+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:18.652403+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:19.652476+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:20.652570+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:21.652666+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:22.652796+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:23.652884+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:24.652967+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:25.653081+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:26.653221+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:27.653326+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:28.653476+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:29.653607+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:30.653735+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:31.653829+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:32.653967+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:33.654129+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:34.654236+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:35.654375+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:36.654510+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:37.654659+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:38.654883+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:39.655040+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:40.655169+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:41.655295+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:42.655393+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:43.655521+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:44.655664+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:45.655825+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:46.655934+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:47.656076+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:48.656182+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:49.656310+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:50.656407+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:51.656481+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:52.656580+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:53.656692+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:54.656857+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:55.656983+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:56.657114+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:57.657258+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:58.657381+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:32:59.657540+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:00.657689+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:01.657810+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:02.658230+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:03.658355+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:04.658491+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:05.658600+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:06.658740+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:07.658864+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:08.658984+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:09.659143+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:10.659251+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:11.659369+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:12.659488+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:13.659644+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:14.659806+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:15.659953+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:16.660108+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:17.660220+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:18.660378+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:19.660501+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:20.660634+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:21.660725+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:22.660834+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:23.660964+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:24.661101+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:25.661235+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:26.661405+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:27.661530+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:28.661649+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:29.661776+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:30.661932+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:31.662088+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:32.662241+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:33.662335+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:34.662479+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:35.662586+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:36.662718+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:37.662856+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:38.662987+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:39.663117+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:40.663253+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:41.663357+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5739 writes, 24K keys, 5739 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5739 writes, 979 syncs, 5.86 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f62285a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557f622858d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:42.663493+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:43.663632+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:44.663712+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:45.663849+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:46.664036+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:47.664273+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:48.664429+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:49.664564+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:50.664678+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:51.664784+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:52.664886+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:53.664985+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:54.665086+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:55.665187+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:56.665313+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:57.665496+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:58.665636+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:33:59.665772+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:00.665891+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:01.666009+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:02.666133+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:03.666269+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:04.666395+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:05.666545+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:06.666734+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:07.666868+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:08.667005+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:09.667121+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:10.667257+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:11.667370+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:12.667506+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:13.667623+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:14.667727+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:15.667822+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:16.667953+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:17.668105+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:18.668239+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:19.668369+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:20.668504+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:21.668629+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:22.668754+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:23.668853+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:24.668987+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:25.669152+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:26.669316+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:27.669465+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:28.669598+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 786432 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:29.669757+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 786432 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:30.669896+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 786432 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:31.670022+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 786432 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:32.670131+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 786432 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:33.670292+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:34.670426+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:35.670547+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.939239502s of 299.946044922s, submitted: 18
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:36.670709+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:37.670842+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:38.670950+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:39.671079+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:40.671204+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:41.671332+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:42.671490+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:43.671623+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:44.671763+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:45.671896+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:46.672011+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:47.672145+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:48.672283+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:49.672379+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:50.672551+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:51.672745+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:52.672922+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:53.673089+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:54.673218+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:55.673328+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:56.673496+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:57.673601+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:58.673736+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:34:59.673842+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:00.673990+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:01.674096+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:02.674223+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:03.674349+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:04.674474+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:05.674600+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:06.674779+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:07.674913+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 409600 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:08.675053+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:09.675189+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:10.675299+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:11.675409+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:12.675539+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:13.675669+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:14.675778+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:15.675891+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:16.676038+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:17.676123+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:18.676222+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:19.676352+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:20.676482+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:21.676613+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:22.676750+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:23.676881+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:24.677036+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:25.677139+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:26.677276+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:27.677462+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 401408 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:28.677593+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:29.677740+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:30.677875+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:31.678004+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:32.678123+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:33.678252+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:34.678411+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:35.678542+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:36.678721+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:37.678835+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:38.678973+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:39.679063+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:40.679156+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:41.679263+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:42.679382+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:43.679562+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:44.679717+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:45.679868+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:46.680041+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:47.680196+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:48.680478+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 385024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:49.680574+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:50.680668+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:51.680777+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:52.680870+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:53.680964+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:54.681066+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:55.684570+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:56.684659+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:57.684770+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:58.684863+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:35:59.684975+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 376832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 07:36:33 compute-0 ceph-osd[85140]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 07:36:33 compute-0 ceph-osd[85140]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993174 data_alloc: 218103808 data_used: 8011
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:36:00.685077+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 303104 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'config diff' '{prefix=config diff}'
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'config show' '{prefix=config show}'
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:36:01.685190+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 1794048 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: tick
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_tickets
Dec 13 07:36:33 compute-0 ceph-osd[85140]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T07:36:02.685298+0000)
Dec 13 07:36:33 compute-0 ceph-osd[85140]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1671168 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Dec 13 07:36:33 compute-0 ceph-osd[85140]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xabb5e/0x163000, compress 0x0/0x0/0x0, omap 0x108ad, meta 0x2bbf753), peers [1,2] op hist [])
Dec 13 07:36:33 compute-0 ceph-osd[85140]: do_command 'log dump' '{prefix=log dump}'
Dec 13 07:36:33 compute-0 sudo[249180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:36:33 compute-0 sudo[249180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:33 compute-0 sudo[249180]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='client.14516 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 07:36:33 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 13 07:36:33 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/309217865' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 13 07:36:34 compute-0 sudo[249220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 07:36:34 compute-0 sudo[249220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:34 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14524 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} v 0)
Dec 13 07:36:34 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:36:34 compute-0 podman[249305]: 2025-12-13 07:36:34.3361534 +0000 UTC m=+0.042381570 container create 246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 07:36:34 compute-0 systemd[1]: Started libpod-conmon-246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512.scope.
Dec 13 07:36:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:36:34 compute-0 podman[249305]: 2025-12-13 07:36:34.406048502 +0000 UTC m=+0.112276682 container init 246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lalande, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:36:34 compute-0 podman[249305]: 2025-12-13 07:36:34.323149418 +0000 UTC m=+0.029377598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:36:34 compute-0 podman[249305]: 2025-12-13 07:36:34.423394217 +0000 UTC m=+0.129622377 container start 246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Dec 13 07:36:34 compute-0 podman[249305]: 2025-12-13 07:36:34.424552816 +0000 UTC m=+0.130780976 container attach 246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 07:36:34 compute-0 silly_lalande[249320]: 167 167
Dec 13 07:36:34 compute-0 systemd[1]: libpod-246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512.scope: Deactivated successfully.
Dec 13 07:36:34 compute-0 conmon[249320]: conmon 246dd973ebd90fd52639 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512.scope/container/memory.events
Dec 13 07:36:34 compute-0 podman[249305]: 2025-12-13 07:36:34.427911631 +0000 UTC m=+0.134139792 container died 246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c1573260595cc422c9ca4ef7df165692b65b7df8be412a3787f928d46ced49-merged.mount: Deactivated successfully.
Dec 13 07:36:34 compute-0 podman[249305]: 2025-12-13 07:36:34.467391906 +0000 UTC m=+0.173620066 container remove 246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lalande, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 07:36:34 compute-0 systemd[1]: libpod-conmon-246dd973ebd90fd52639ddd13d21c7395249b8763562806eec31e3180cfe5512.scope: Deactivated successfully.
Dec 13 07:36:34 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14528 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:34 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 13 07:36:34 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634418473' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 13 07:36:34 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:34 compute-0 podman[249350]: 2025-12-13 07:36:34.629163173 +0000 UTC m=+0.037708885 container create 27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 07:36:34 compute-0 systemd[1]: Started libpod-conmon-27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319.scope.
Dec 13 07:36:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6436795aee210cda37d2f7c6c716546e68990b3c490fda4b84d2a9139b2ee1c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6436795aee210cda37d2f7c6c716546e68990b3c490fda4b84d2a9139b2ee1c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6436795aee210cda37d2f7c6c716546e68990b3c490fda4b84d2a9139b2ee1c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6436795aee210cda37d2f7c6c716546e68990b3c490fda4b84d2a9139b2ee1c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6436795aee210cda37d2f7c6c716546e68990b3c490fda4b84d2a9139b2ee1c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:34 compute-0 podman[249350]: 2025-12-13 07:36:34.69249767 +0000 UTC m=+0.101043382 container init 27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hamilton, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:36:34 compute-0 podman[249350]: 2025-12-13 07:36:34.617583451 +0000 UTC m=+0.026129182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:36:34 compute-0 podman[249350]: 2025-12-13 07:36:34.715562199 +0000 UTC m=+0.124107900 container start 27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 07:36:34 compute-0 podman[249350]: 2025-12-13 07:36:34.719054886 +0000 UTC m=+0.127600608 container attach 27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 07:36:34 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/309217865' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 13 07:36:34 compute-0 ceph-mon[74928]: from='client.14524 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:34 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.kikquh", "name": "rgw_frontends"} : dispatch
Dec 13 07:36:34 compute-0 ceph-mon[74928]: from='client.14528 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:34 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3634418473' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 13 07:36:34 compute-0 ceph-mon[74928]: pgmap v794: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:34 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14530 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:35 compute-0 amazing_hamilton[249371]: --> passed data devices: 0 physical, 3 LVM
Dec 13 07:36:35 compute-0 amazing_hamilton[249371]: --> All data devices are unavailable
Dec 13 07:36:35 compute-0 systemd[1]: libpod-27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319.scope: Deactivated successfully.
Dec 13 07:36:35 compute-0 podman[249350]: 2025-12-13 07:36:35.110031825 +0000 UTC m=+0.518577528 container died 27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 07:36:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 13 07:36:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227552236' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 07:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6436795aee210cda37d2f7c6c716546e68990b3c490fda4b84d2a9139b2ee1c2-merged.mount: Deactivated successfully.
Dec 13 07:36:35 compute-0 podman[249350]: 2025-12-13 07:36:35.157267152 +0000 UTC m=+0.565812854 container remove 27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 07:36:35 compute-0 systemd[1]: libpod-conmon-27f49de7f81130389d87148b1f18c299650bbbcd474243c69da44e439c963319.scope: Deactivated successfully.
Dec 13 07:36:35 compute-0 sudo[249220]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:35 compute-0 sudo[249478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:36:35 compute-0 sudo[249478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:35 compute-0 sudo[249478]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:35 compute-0 sudo[249520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- lvm list --format json
Dec 13 07:36:35 compute-0 sudo[249520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:35 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14534 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:35 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 13 07:36:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262250495' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 13 07:36:35 compute-0 podman[249582]: 2025-12-13 07:36:35.636264958 +0000 UTC m=+0.059971737 container create 380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 07:36:35 compute-0 systemd[1]: Started libpod-conmon-380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650.scope.
Dec 13 07:36:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:36:35 compute-0 podman[249582]: 2025-12-13 07:36:35.683334762 +0000 UTC m=+0.107041561 container init 380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_aryabhata, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 07:36:35 compute-0 podman[249582]: 2025-12-13 07:36:35.690158042 +0000 UTC m=+0.113864811 container start 380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:36:35 compute-0 podman[249582]: 2025-12-13 07:36:35.691566871 +0000 UTC m=+0.115273651 container attach 380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_aryabhata, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 07:36:35 compute-0 gallant_aryabhata[249605]: 167 167
Dec 13 07:36:35 compute-0 systemd[1]: libpod-380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650.scope: Deactivated successfully.
Dec 13 07:36:35 compute-0 podman[249582]: 2025-12-13 07:36:35.697993234 +0000 UTC m=+0.121700013 container died 380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_aryabhata, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 07:36:35 compute-0 podman[249582]: 2025-12-13 07:36:35.617047162 +0000 UTC m=+0.040753962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:36:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 13 07:36:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 13 07:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e62783dde167c93a1c4375b0e7217fdc658c5cb6e19493dc6362ce9af9fb085d-merged.mount: Deactivated successfully.
Dec 13 07:36:35 compute-0 podman[249582]: 2025-12-13 07:36:35.722015664 +0000 UTC m=+0.145722443 container remove 380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_aryabhata, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:36:35 compute-0 systemd[1]: libpod-conmon-380b40d1917915dfb59a0d8ed2acacf19821ae89fae0c95ace340b9a76132650.scope: Deactivated successfully.
Dec 13 07:36:35 compute-0 ceph-mon[74928]: from='client.14530 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4227552236' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 07:36:35 compute-0 ceph-mon[74928]: from='client.14534 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 07:36:35 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4262250495' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 13 07:36:35 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 13 07:36:35 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 13 07:36:35 compute-0 podman[249651]: 2025-12-13 07:36:35.905895751 +0000 UTC m=+0.052177299 container create 237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 07:36:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 13 07:36:35 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 13 07:36:35 compute-0 systemd[1]: Started libpod-conmon-237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27.scope.
Dec 13 07:36:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a5796283c551c5b6b4657aa837ccb8b64bb46df5ed5df47bf780ece80f38d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a5796283c551c5b6b4657aa837ccb8b64bb46df5ed5df47bf780ece80f38d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a5796283c551c5b6b4657aa837ccb8b64bb46df5ed5df47bf780ece80f38d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a5796283c551c5b6b4657aa837ccb8b64bb46df5ed5df47bf780ece80f38d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:35 compute-0 podman[249651]: 2025-12-13 07:36:35.976932421 +0000 UTC m=+0.123213979 container init 237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:36:35 compute-0 podman[249651]: 2025-12-13 07:36:35.892050288 +0000 UTC m=+0.038331847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:36:35 compute-0 podman[249651]: 2025-12-13 07:36:35.992858648 +0000 UTC m=+0.139140185 container start 237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shamir, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:36:35 compute-0 podman[249651]: 2025-12-13 07:36:35.994047542 +0000 UTC m=+0.140329080 container attach 237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 07:36:36 compute-0 reverent_shamir[249686]: {
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:     "0": [
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:         {
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "devices": [
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "/dev/loop3"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             ],
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_name": "ceph_lv0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_size": "21470642176",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=82d490c1-ea27-486f-9cfe-f392b9710718,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "name": "ceph_lv0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "tags": {
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.block_uuid": "tkUjW1-SaXQ-bDe0-0Tiu-0HCu-kU4u-mCyikh",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cluster_name": "ceph",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.crush_device_class": "",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.encrypted": "0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.objectstore": "bluestore",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osd_fsid": "82d490c1-ea27-486f-9cfe-f392b9710718",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osd_id": "0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.type": "block",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.vdo": "0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.with_tpm": "0"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             },
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "type": "block",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "vg_name": "ceph_vg0"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:         }
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:     ],
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:     "1": [
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:         {
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "devices": [
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "/dev/loop4"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             ],
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_name": "ceph_lv1",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_size": "21470642176",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "name": "ceph_lv1",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "tags": {
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.block_uuid": "KJfBHD-bTeI-sse3-f5pt-wVe5-1jqy-9DmfqM",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cluster_name": "ceph",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.crush_device_class": "",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.encrypted": "0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.objectstore": "bluestore",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osd_fsid": "4eea80a2-b970-48bd-bbd0-96f2fa3ec0bf",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osd_id": "1",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.type": "block",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.vdo": "0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.with_tpm": "0"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             },
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "type": "block",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "vg_name": "ceph_vg1"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:         }
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:     ],
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:     "2": [
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:         {
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "devices": [
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "/dev/loop5"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             ],
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_name": "ceph_lv2",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_size": "21470642176",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=00fdae1b-7fad-5f1b-8734-ba4d9298a6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b927bbdd-6a1c-42b3-b097-3003acae4885,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "lv_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "name": "ceph_lv2",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "tags": {
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.block_uuid": "zr2qgU-ONrz-Eb12-bE6w-1lGG-zMad-2N6AE1",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cluster_fsid": "00fdae1b-7fad-5f1b-8734-ba4d9298a6de",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.cluster_name": "ceph",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.crush_device_class": "",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.encrypted": "0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.objectstore": "bluestore",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osd_fsid": "b927bbdd-6a1c-42b3-b097-3003acae4885",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osd_id": "2",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.type": "block",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.vdo": "0",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:                 "ceph.with_tpm": "0"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             },
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "type": "block",
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:             "vg_name": "ceph_vg2"
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:         }
Dec 13 07:36:36 compute-0 reverent_shamir[249686]:     ]
Dec 13 07:36:36 compute-0 reverent_shamir[249686]: }
Dec 13 07:36:36 compute-0 systemd[1]: libpod-237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27.scope: Deactivated successfully.
Dec 13 07:36:36 compute-0 podman[249651]: 2025-12-13 07:36:36.25365503 +0000 UTC m=+0.399936569 container died 237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shamir, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 07:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-81a5796283c551c5b6b4657aa837ccb8b64bb46df5ed5df47bf780ece80f38d9-merged.mount: Deactivated successfully.
Dec 13 07:36:36 compute-0 podman[249651]: 2025-12-13 07:36:36.287323287 +0000 UTC m=+0.433604824 container remove 237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:36:36 compute-0 systemd[1]: libpod-conmon-237bddd7121dd22e353467739efb8514aa38ebc9076278883436832201276e27.scope: Deactivated successfully.
Dec 13 07:36:36 compute-0 sudo[249520]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:36 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 13 07:36:36 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383203451' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 13 07:36:36 compute-0 sudo[249738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 07:36:36 compute-0 sudo[249738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:36 compute-0 sudo[249738]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:36 compute-0 sudo[249775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/00fdae1b-7fad-5f1b-8734-ba4d9298a6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 00fdae1b-7fad-5f1b-8734-ba4d9298a6de -- raw list --format json
Dec 13 07:36:36 compute-0 sudo[249775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:36 compute-0 systemd[1]: Starting Hostname Service...
Dec 13 07:36:36 compute-0 systemd[1]: Started Hostname Service.
Dec 13 07:36:36 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:36 compute-0 podman[249862]: 2025-12-13 07:36:36.777918097 +0000 UTC m=+0.039433557 container create 7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 13 07:36:36 compute-0 systemd[1]: Started libpod-conmon-7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83.scope.
Dec 13 07:36:36 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14548 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:36:36 compute-0 podman[249862]: 2025-12-13 07:36:36.851708585 +0000 UTC m=+0.113224045 container init 7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 07:36:36 compute-0 podman[249862]: 2025-12-13 07:36:36.859962875 +0000 UTC m=+0.121478326 container start 7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_austin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 07:36:36 compute-0 podman[249862]: 2025-12-13 07:36:36.766315932 +0000 UTC m=+0.027831402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:36:36 compute-0 systemd[1]: libpod-7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83.scope: Deactivated successfully.
Dec 13 07:36:36 compute-0 blissful_austin[249880]: 167 167
Dec 13 07:36:36 compute-0 conmon[249880]: conmon 7ef045a8093e37ddfe81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83.scope/container/memory.events
Dec 13 07:36:36 compute-0 podman[249862]: 2025-12-13 07:36:36.865657783 +0000 UTC m=+0.127173233 container attach 7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_austin, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 07:36:36 compute-0 podman[249862]: 2025-12-13 07:36:36.865859984 +0000 UTC m=+0.127375433 container died 7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_austin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 07:36:36 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 13 07:36:36 compute-0 ceph-mon[74928]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 13 07:36:36 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/383203451' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 13 07:36:36 compute-0 ceph-mon[74928]: pgmap v795: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc7b8449204da5322c2256cbdb7e34c3cd08a36bac73ce34ddd7064160553db4-merged.mount: Deactivated successfully.
Dec 13 07:36:36 compute-0 podman[249862]: 2025-12-13 07:36:36.908049272 +0000 UTC m=+0.169564721 container remove 7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_austin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 07:36:36 compute-0 systemd[1]: libpod-conmon-7ef045a8093e37ddfe81fb914959b324beccf6e9409d5e86e4bc2d506d3e7a83.scope: Deactivated successfully.
Dec 13 07:36:37 compute-0 podman[249934]: 2025-12-13 07:36:37.12279152 +0000 UTC m=+0.047017026 container create 404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_aryabhata, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 07:36:37 compute-0 systemd[1]: Started libpod-conmon-404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94.scope.
Dec 13 07:36:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 07:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf735cd2afa77a110740d8bea3c1686099be5dc68bd2760ad55048f93416a7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf735cd2afa77a110740d8bea3c1686099be5dc68bd2760ad55048f93416a7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf735cd2afa77a110740d8bea3c1686099be5dc68bd2760ad55048f93416a7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf735cd2afa77a110740d8bea3c1686099be5dc68bd2760ad55048f93416a7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 07:36:37 compute-0 podman[249934]: 2025-12-13 07:36:37.1878562 +0000 UTC m=+0.112081716 container init 404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_aryabhata, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 07:36:37 compute-0 podman[249934]: 2025-12-13 07:36:37.09598201 +0000 UTC m=+0.020207506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 07:36:37 compute-0 podman[249934]: 2025-12-13 07:36:37.200129848 +0000 UTC m=+0.124355344 container start 404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_aryabhata, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 07:36:37 compute-0 podman[249934]: 2025-12-13 07:36:37.201229926 +0000 UTC m=+0.125455422 container attach 404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_aryabhata, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 07:36:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 13 07:36:37 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136904966' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 13 07:36:37 compute-0 blissful_aryabhata[249952]: {}
Dec 13 07:36:37 compute-0 lvm[250082]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 07:36:37 compute-0 lvm[250082]: VG ceph_vg2 finished
Dec 13 07:36:37 compute-0 lvm[250077]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 07:36:37 compute-0 lvm[250077]: VG ceph_vg0 finished
Dec 13 07:36:37 compute-0 systemd[1]: libpod-404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94.scope: Deactivated successfully.
Dec 13 07:36:37 compute-0 podman[249934]: 2025-12-13 07:36:37.806201108 +0000 UTC m=+0.730426605 container died 404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_aryabhata, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 07:36:37 compute-0 lvm[250081]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 07:36:37 compute-0 lvm[250081]: VG ceph_vg1 finished
Dec 13 07:36:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 07:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bf735cd2afa77a110740d8bea3c1686099be5dc68bd2760ad55048f93416a7e-merged.mount: Deactivated successfully.
Dec 13 07:36:37 compute-0 podman[249934]: 2025-12-13 07:36:37.857963317 +0000 UTC m=+0.782188813 container remove 404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_aryabhata, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 07:36:37 compute-0 systemd[1]: libpod-conmon-404541e0807a459ae6ec7f22f774933b836db8dd393ed1f5af5f1eac4d23db94.scope: Deactivated successfully.
Dec 13 07:36:37 compute-0 ceph-mon[74928]: from='client.14548 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:37 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3136904966' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 13 07:36:37 compute-0 sudo[249775]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 07:36:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:37 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 07:36:37 compute-0 ceph-mon[74928]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:37 compute-0 podman[250063]: 2025-12-13 07:36:37.920364168 +0000 UTC m=+0.179607024 container health_status d4b07d1867f144077f7c5c5cc5ae5c3e4d24058947898d6fee77240ec3efb8ed (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 07:36:38 compute-0 sudo[250109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 07:36:38 compute-0 sudo[250109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 07:36:38 compute-0 sudo[250109]: pam_unix(sudo:session): session closed for user root
Dec 13 07:36:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 13 07:36:38 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516741696' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 13 07:36:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Optimize plan auto_2025-12-13_07:36:38
Dec 13 07:36:38 compute-0 ceph-mgr[75200]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 07:36:38 compute-0 ceph-mgr[75200]: [balancer INFO root] do_upmap
Dec 13 07:36:38 compute-0 ceph-mgr[75200]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms']
Dec 13 07:36:38 compute-0 ceph-mgr[75200]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 07:36:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 13 07:36:38 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2630278703' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 13 07:36:38 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:38 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:38 compute-0 ceph-mon[74928]: from='mgr.14122 192.168.122.100:0/3613703998' entity='mgr.compute-0.qsherl' 
Dec 13 07:36:38 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2516741696' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 13 07:36:38 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2630278703' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 13 07:36:38 compute-0 ceph-mon[74928]: pgmap v796: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:38 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 13 07:36:38 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218532462' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 07:36:39 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14558 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:39 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 13 07:36:39 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524384979' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 13 07:36:39 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/4218532462' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 13 07:36:39 compute-0 ceph-mon[74928]: from='client.14558 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:39 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/3524384979' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 13 07:36:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 13 07:36:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221720985' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 13 07:36:40 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14564 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:40 compute-0 ceph-mgr[75200]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:40 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 13 07:36:40 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1576928420' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 13 07:36:40 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/2221720985' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 13 07:36:40 compute-0 ceph-mon[74928]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:40 compute-0 ceph-mon[74928]: pgmap v797: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 07:36:40 compute-0 ceph-mon[74928]: from='client.? 192.168.122.100:0/1576928420' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 13 07:36:41 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14568 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:41 compute-0 ceph-mgr[75200]: log_channel(audit) log [DBG] : from='client.14570 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:36:41.646 154121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 07:36:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:36:41.647 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 07:36:41 compute-0 ovn_metadata_agent[154116]: 2025-12-13 07:36:41.647 154121 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 07:36:41 compute-0 ceph-mon[74928]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:41 compute-0 ceph-mon[74928]: from='client.14570 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 07:36:41 compute-0 ceph-mon[74928]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 13 07:36:41 compute-0 ceph-mon[74928]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1052944872' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
